Test Report: QEMU_macOS 19370

dd51e72d60a15da3a1a4a8c267729efa6313a896:2024-08-06:35671

Failed tests (94/278)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.45
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10
55 TestCertOptions 10.43
56 TestCertExpiration 195.45
57 TestDockerFlags 10.25
58 TestForceSystemdFlag 10.07
59 TestForceSystemdEnv 10.84
104 TestFunctional/parallel/ServiceCmdConnect 28.5
176 TestMultiControlPlane/serial/StopSecondaryNode 312.27
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 225.11
178 TestMultiControlPlane/serial/RestartSecondaryNode 305.23
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.59
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
183 TestMultiControlPlane/serial/StopCluster 208.7
186 TestImageBuild/serial/Setup 10.23
189 TestJSONOutput/start/Command 9.75
195 TestJSONOutput/pause/Command 0.07
201 TestJSONOutput/unpause/Command 0.04
218 TestMinikubeProfile 10.1
221 TestMountStart/serial/StartWithMountFirst 10.03
224 TestMultiNode/serial/FreshStart2Nodes 9.89
225 TestMultiNode/serial/DeployApp2Nodes 106.67
226 TestMultiNode/serial/PingHostFrom2Pods 0.09
227 TestMultiNode/serial/AddNode 0.07
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.07
230 TestMultiNode/serial/CopyFile 0.06
231 TestMultiNode/serial/StopNode 0.13
232 TestMultiNode/serial/StartAfterStop 49.58
233 TestMultiNode/serial/RestartKeepsNodes 8.41
234 TestMultiNode/serial/DeleteNode 0.1
235 TestMultiNode/serial/StopMultiNode 3.98
236 TestMultiNode/serial/RestartMultiNode 5.25
237 TestMultiNode/serial/ValidateNameConflict 20.1
241 TestPreload 10.15
243 TestScheduledStopUnix 10.04
244 TestSkaffold 12.62
247 TestRunningBinaryUpgrade 592.76
249 TestKubernetesUpgrade 18.96
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.75
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.24
265 TestStoppedBinaryUpgrade/Upgrade 571.93
267 TestPause/serial/Start 10.05
277 TestNoKubernetes/serial/StartWithK8s 10.06
278 TestNoKubernetes/serial/StartWithStopK8s 5.26
279 TestNoKubernetes/serial/Start 5.29
283 TestNoKubernetes/serial/StartNoArgs 5.29
285 TestNetworkPlugins/group/auto/Start 9.8
286 TestNetworkPlugins/group/kindnet/Start 9.99
287 TestNetworkPlugins/group/flannel/Start 9.93
288 TestNetworkPlugins/group/enable-default-cni/Start 9.8
289 TestNetworkPlugins/group/bridge/Start 9.94
290 TestNetworkPlugins/group/kubenet/Start 9.8
291 TestNetworkPlugins/group/custom-flannel/Start 9.79
292 TestNetworkPlugins/group/calico/Start 9.77
293 TestNetworkPlugins/group/false/Start 9.79
295 TestStartStop/group/old-k8s-version/serial/FirstStart 9.83
297 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
301 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
302 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/old-k8s-version/serial/Pause 0.1
307 TestStartStop/group/no-preload/serial/FirstStart 9.98
308 TestStartStop/group/no-preload/serial/DeployApp 0.09
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
312 TestStartStop/group/no-preload/serial/SecondStart 5.26
314 TestStartStop/group/embed-certs/serial/FirstStart 9.86
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
318 TestStartStop/group/no-preload/serial/Pause 0.1
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.89
321 TestStartStop/group/embed-certs/serial/DeployApp 0.09
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
325 TestStartStop/group/embed-certs/serial/SecondStart 7.57
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/embed-certs/serial/Pause 0.1
336 TestStartStop/group/newest-cni/serial/FirstStart 9.97
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
345 TestStartStop/group/newest-cni/serial/SecondStart 5.25
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
349 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (17.45s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-830000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-830000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (17.450143375s)

-- stdout --
	{"specversion":"1.0","id":"7a9783a6-7f89-4cfb-a748-a971d508f5ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-830000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd061472-5488-40af-8572-860dd3dfaea7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19370"}}
	{"specversion":"1.0","id":"a1195906-ec51-4689-8297-cccc62507a64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig"}}
	{"specversion":"1.0","id":"56c429f9-052b-4bc2-9d41-8f5777beb5b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1c145212-7edd-4b83-868b-ff6ab8c1b4a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1df2210d-911e-4a96-ba32-f9370d6b6ea5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube"}}
	{"specversion":"1.0","id":"0dfad261-1695-4baa-b322-f4165e4f648c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"c4c4ba8d-a116-40c1-8ce2-010a0c4a5b19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9743afa-9626-4a35-b66b-34a8547d36a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"89a483a8-3a75-410d-92bd-e75be807cb91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"16a0d468-e4c3-46f5-9d66-0cc4083a1d70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-830000\" primary control-plane node in \"download-only-830000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca3ecb2a-114f-459e-b3ae-c258f77fb57f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"025fc3a0-67a8-4c43-bc52-0997931fcd09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19370-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108b49d20 0x108b49d20 0x108b49d20 0x108b49d20 0x108b49d20 0x108b49d20 0x108b49d20] Decompressors:map[bz2:0x1400012c578 gz:0x1400012c600 tar:0x1400012c5b0 tar.bz2:0x1400012c5c0 tar.gz:0x1400012c5d0 tar.xz:0x1400012c5e0 tar.zst:0x1400012c5f0 tbz2:0x1400012c5c0 tgz:0x1400012c5d0 txz:0x1400012c5e0 tzst:0x1400012c5f0 xz:0x1400012c608 zip:0x1400012c610 zst:0x1400012c620] Getters:map[file:0x1400054cda0 http:0x14000814460 https:0x140008144b0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"5f267fe2-71e5-4075-8466-782432e5f48c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0806 00:04:12.808630    1457 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:04:12.808760    1457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:04:12.808763    1457 out.go:304] Setting ErrFile to fd 2...
	I0806 00:04:12.808770    1457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:04:12.808891    1457 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	W0806 00:04:12.808989    1457 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19370-965/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19370-965/.minikube/config/config.json: no such file or directory
	I0806 00:04:12.810241    1457 out.go:298] Setting JSON to true
	I0806 00:04:12.827335    1457 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":220,"bootTime":1722927632,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:04:12.827403    1457 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:04:12.833139    1457 out.go:97] [download-only-830000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:04:12.833286    1457 notify.go:220] Checking for updates...
	W0806 00:04:12.833313    1457 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball: no such file or directory
	I0806 00:04:12.836157    1457 out.go:169] MINIKUBE_LOCATION=19370
	I0806 00:04:12.842097    1457 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:04:12.847172    1457 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:04:12.848548    1457 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:04:12.851123    1457 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	W0806 00:04:12.857153    1457 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0806 00:04:12.857399    1457 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:04:12.861119    1457 out.go:97] Using the qemu2 driver based on user configuration
	I0806 00:04:12.861138    1457 start.go:297] selected driver: qemu2
	I0806 00:04:12.861152    1457 start.go:901] validating driver "qemu2" against <nil>
	I0806 00:04:12.861209    1457 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:04:12.865146    1457 out.go:169] Automatically selected the socket_vmnet network
	I0806 00:04:12.871967    1457 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0806 00:04:12.872052    1457 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 00:04:12.872077    1457 cni.go:84] Creating CNI manager for ""
	I0806 00:04:12.872092    1457 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0806 00:04:12.872167    1457 start.go:340] cluster config:
	{Name:download-only-830000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-830000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:04:12.877540    1457 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:04:12.882136    1457 out.go:97] Downloading VM boot image ...
	I0806 00:04:12.882156    1457 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0806 00:04:20.151361    1457 out.go:97] Starting "download-only-830000" primary control-plane node in "download-only-830000" cluster
	I0806 00:04:20.151399    1457 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0806 00:04:20.206348    1457 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0806 00:04:20.206370    1457 cache.go:56] Caching tarball of preloaded images
	I0806 00:04:20.206526    1457 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0806 00:04:20.210746    1457 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0806 00:04:20.210768    1457 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 00:04:20.287368    1457 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0806 00:04:29.138093    1457 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 00:04:29.138280    1457 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 00:04:29.833264    1457 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0806 00:04:29.833465    1457 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/download-only-830000/config.json ...
	I0806 00:04:29.833486    1457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/download-only-830000/config.json: {Name:mk241b18476bf4c8f435537a1572cd00aba13ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:04:29.833732    1457 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0806 00:04:29.833933    1457 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0806 00:04:30.191760    1457 out.go:169] 
	W0806 00:04:30.196814    1457 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19370-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108b49d20 0x108b49d20 0x108b49d20 0x108b49d20 0x108b49d20 0x108b49d20 0x108b49d20] Decompressors:map[bz2:0x1400012c578 gz:0x1400012c600 tar:0x1400012c5b0 tar.bz2:0x1400012c5c0 tar.gz:0x1400012c5d0 tar.xz:0x1400012c5e0 tar.zst:0x1400012c5f0 tbz2:0x1400012c5c0 tgz:0x1400012c5d0 txz:0x1400012c5e0 tzst:0x1400012c5f0 xz:0x1400012c608 zip:0x1400012c610 zst:0x1400012c620] Getters:map[file:0x1400054cda0 http:0x14000814460 https:0x140008144b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0806 00:04:30.196839    1457 out_reason.go:110] 
	W0806 00:04:30.204766    1457 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:04:30.207531    1457 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-830000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (17.45s)
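
Note: the root cause above is the 404 on the kubectl checksum URL for darwin/arm64 at v1.20.0; Kubernetes did not publish darwin/arm64 kubectl binaries for that release, so the checksum file does not exist. A minimal standalone Go sketch (not part of the test suite; URL copied from the log) that reproduces the failing check:

	// checksum404.go: HEAD the checksum URL that minikube tried to fetch.
	// Expected output on this run: "status: 404 Not Found", matching
	// "bad response code: 404" in the log above.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}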

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19370-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
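
Note: this failure is purely downstream of the previous one: the kubectl download never completed, so the cached binary is absent. A minimal standalone Go sketch (an assumed equivalent of the stat check at aaa_download_only_test.go:175; path copied from the log):

	// statkubectl.go: check whether the cached kubectl binary exists.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			fmt.Println("binary missing:", err) // matches the failure above
			return
		}
		fmt.Println("binary present")
	}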

TestOffline (10s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-244000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-244000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.849432833s)

-- stdout --
	* [offline-docker-244000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-244000" primary control-plane node in "offline-docker-244000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-244000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 00:51:22.767534    4075 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:51:22.767681    4075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:51:22.767685    4075 out.go:304] Setting ErrFile to fd 2...
	I0806 00:51:22.767687    4075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:51:22.767827    4075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:51:22.768939    4075 out.go:298] Setting JSON to false
	I0806 00:51:22.786633    4075 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3050,"bootTime":1722927632,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:51:22.786699    4075 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:51:22.792420    4075 out.go:177] * [offline-docker-244000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:51:22.800310    4075 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:51:22.800322    4075 notify.go:220] Checking for updates...
	I0806 00:51:22.808257    4075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:51:22.811336    4075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:51:22.814260    4075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:51:22.817222    4075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:51:22.820285    4075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:51:22.823629    4075 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:51:22.823693    4075 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:51:22.827252    4075 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 00:51:22.834297    4075 start.go:297] selected driver: qemu2
	I0806 00:51:22.834307    4075 start.go:901] validating driver "qemu2" against <nil>
	I0806 00:51:22.834316    4075 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:51:22.836238    4075 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:51:22.839242    4075 out.go:177] * Automatically selected the socket_vmnet network
	I0806 00:51:22.842313    4075 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:51:22.842334    4075 cni.go:84] Creating CNI manager for ""
	I0806 00:51:22.842340    4075 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:51:22.842345    4075 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 00:51:22.842384    4075 start.go:340] cluster config:
	{Name:offline-docker-244000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:51:22.845977    4075 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:51:22.851272    4075 out.go:177] * Starting "offline-docker-244000" primary control-plane node in "offline-docker-244000" cluster
	I0806 00:51:22.855259    4075 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:51:22.855285    4075 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 00:51:22.855295    4075 cache.go:56] Caching tarball of preloaded images
	I0806 00:51:22.855366    4075 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:51:22.855371    4075 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:51:22.855444    4075 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/offline-docker-244000/config.json ...
	I0806 00:51:22.855454    4075 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/offline-docker-244000/config.json: {Name:mk7fcb85307ad1335672f807e9a53345666bc295 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:51:22.855773    4075 start.go:360] acquireMachinesLock for offline-docker-244000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:51:22.855809    4075 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "offline-docker-244000"
	I0806 00:51:22.855819    4075 start.go:93] Provisioning new machine with config: &{Name:offline-docker-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:51:22.855853    4075 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:51:22.860167    4075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 00:51:22.875804    4075 start.go:159] libmachine.API.Create for "offline-docker-244000" (driver="qemu2")
	I0806 00:51:22.875838    4075 client.go:168] LocalClient.Create starting
	I0806 00:51:22.875912    4075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:51:22.875944    4075 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:22.875957    4075 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:22.875997    4075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:51:22.876020    4075 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:22.876028    4075 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:22.876435    4075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:51:23.027140    4075 main.go:141] libmachine: Creating SSH key...
	I0806 00:51:23.193210    4075 main.go:141] libmachine: Creating Disk image...
	I0806 00:51:23.193223    4075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:51:23.193683    4075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/disk.qcow2
	I0806 00:51:23.203361    4075 main.go:141] libmachine: STDOUT: 
	I0806 00:51:23.203380    4075 main.go:141] libmachine: STDERR: 
	I0806 00:51:23.203430    4075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/disk.qcow2 +20000M
	I0806 00:51:23.212345    4075 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:51:23.212376    4075 main.go:141] libmachine: STDERR: 
	I0806 00:51:23.212391    4075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/disk.qcow2
	I0806 00:51:23.212402    4075 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:51:23.212413    4075 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:51:23.212448    4075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:a6:40:07:3c:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/disk.qcow2
	I0806 00:51:23.214518    4075 main.go:141] libmachine: STDOUT: 
	I0806 00:51:23.214537    4075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:51:23.214557    4075 client.go:171] duration metric: took 338.717917ms to LocalClient.Create
	I0806 00:51:25.216645    4075 start.go:128] duration metric: took 2.360798375s to createHost
	I0806 00:51:25.216670    4075 start.go:83] releasing machines lock for "offline-docker-244000", held for 2.360871334s
	W0806 00:51:25.216699    4075 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:51:25.225856    4075 out.go:177] * Deleting "offline-docker-244000" in qemu2 ...
	W0806 00:51:25.237361    4075 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:51:25.237376    4075 start.go:729] Will try again in 5 seconds ...
	I0806 00:51:30.239518    4075 start.go:360] acquireMachinesLock for offline-docker-244000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:51:30.240045    4075 start.go:364] duration metric: took 385.833µs to acquireMachinesLock for "offline-docker-244000"
	I0806 00:51:30.240179    4075 start.go:93] Provisioning new machine with config: &{Name:offline-docker-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:51:30.240511    4075 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:51:30.249528    4075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 00:51:30.298877    4075 start.go:159] libmachine.API.Create for "offline-docker-244000" (driver="qemu2")
	I0806 00:51:30.298929    4075 client.go:168] LocalClient.Create starting
	I0806 00:51:30.299036    4075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:51:30.299105    4075 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:30.299127    4075 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:30.299191    4075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:51:30.299235    4075 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:30.299247    4075 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:30.299851    4075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:51:30.460121    4075 main.go:141] libmachine: Creating SSH key...
	I0806 00:51:30.520072    4075 main.go:141] libmachine: Creating Disk image...
	I0806 00:51:30.520079    4075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:51:30.520261    4075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/disk.qcow2
	I0806 00:51:30.529288    4075 main.go:141] libmachine: STDOUT: 
	I0806 00:51:30.529307    4075 main.go:141] libmachine: STDERR: 
	I0806 00:51:30.529363    4075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/disk.qcow2 +20000M
	I0806 00:51:30.537268    4075 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:51:30.537284    4075 main.go:141] libmachine: STDERR: 
	I0806 00:51:30.537299    4075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/disk.qcow2
	I0806 00:51:30.537307    4075 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:51:30.537319    4075 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:51:30.537347    4075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:16:a4:e9:85:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/offline-docker-244000/disk.qcow2
	I0806 00:51:30.538838    4075 main.go:141] libmachine: STDOUT: 
	I0806 00:51:30.538857    4075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:51:30.538871    4075 client.go:171] duration metric: took 239.938542ms to LocalClient.Create
	I0806 00:51:32.541041    4075 start.go:128] duration metric: took 2.30050975s to createHost
	I0806 00:51:32.541192    4075 start.go:83] releasing machines lock for "offline-docker-244000", held for 2.301136375s
	W0806 00:51:32.541546    4075 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:51:32.555229    4075 out.go:177] 
	W0806 00:51:32.559207    4075 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:51:32.559572    4075 out.go:239] * 
	* 
	W0806 00:51:32.562223    4075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:51:32.575149    4075 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-244000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-06 00:51:32.589391 -0700 PDT m=+2839.890848251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-244000 -n offline-docker-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-244000 -n offline-docker-244000: exit status 7 (64.484458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-244000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-244000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-244000
--- FAIL: TestOffline (10.00s)
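
Note: both VM creation attempts fail with "Connection refused" on /var/run/socket_vmnet, which points at the socket_vmnet daemon not running on this CI host rather than at the test logic; the same error recurs across the qemu2 failures in this report. A minimal standalone Go sketch, independent of minikube, to probe the socket:

	// probesocket.go: dial the socket_vmnet unix socket that QEMU needs.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this host the expected result is "connection refused",
			// matching the log above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}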

TestCertOptions (10.43s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-780000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-780000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.166939041s)

-- stdout --
	* [cert-options-780000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-780000" primary control-plane node in "cert-options-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-780000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-780000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-780000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.546958ms)

-- stdout --
	* The control-plane node cert-options-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-780000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-780000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-780000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-780000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-780000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.749ms)

-- stdout --
	* The control-plane node cert-options-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-780000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-780000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port.
-- stdout --
	* The control-plane node cert-options-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-780000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-06 00:52:04.144311 -0700 PDT m=+2871.445971584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-780000 -n cert-options-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-780000 -n cert-options-780000: exit status 7 (29.969583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-780000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-780000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-780000
--- FAIL: TestCertOptions (10.43s)
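
All of the assertions above fail downstream of a single root cause: the qemu2 driver's "Failed to connect to \"/var/run/socket_vmnet\": Connection refused". The VM is never created, the control-plane node stays Stopped, and the SAN checks at cert_options_test.go:69 then run against a certificate that was never generated. Before rerunning, the failing step can be reproduced in isolation; a minimal Go sketch (the socket path is taken from the errors above; the probe itself is illustrative, not minikube code):

// probe_socket_vmnet.go: reproduce the driver's failing connection attempt.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config dumps in this report.
	const path = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		// The failing runs above land here: "Connection refused" means
		// no socket_vmnet daemon is accepting on the socket.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On the affected host this should keep printing the same refusal until the socket_vmnet daemon is running again, at which point the suite is worth retrying.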

TestCertExpiration (195.45s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-730000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-730000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.079099375s)

-- stdout --
	* [cert-expiration-730000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-730000" primary control-plane node in "cert-expiration-730000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-730000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-730000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-730000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-730000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.230573583s)

-- stdout --
	* [cert-expiration-730000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-730000" primary control-plane node in "cert-expiration-730000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-730000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-730000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-730000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-730000" primary control-plane node in "cert-expiration-730000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-730000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-730000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-06 00:55:03.980954 -0700 PDT m=+3051.283777876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-730000 -n cert-expiration-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-730000 -n cert-expiration-730000: exit status 7 (62.4275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-730000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-730000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-730000
--- FAIL: TestCertExpiration (195.45s)
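
TestCertExpiration never reaches the behavior it exists to test: the first start (--cert-expiration=3m) and the follow-up start (--cert-expiration=8760h), whose output is expected to warn about the now-expired certs, both die at the same socket_vmnet connection failure. The property being exercised reduces to the NotAfter field of the apiserver certificate; a minimal stdlib sketch that reports it (the certificate path argument is hypothetical, since no cert was ever generated in this run):

// cert_expiry.go: report the NotAfter field that --cert-expiration controls.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: cert_expiry <path-to-pem-cert>")
		os.Exit(2)
	}
	// Inside a running guest this would be /var/lib/minikube/certs/apiserver.crt.
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("NotAfter=%s expired=%t\n", cert.NotAfter, time.Now().After(cert.NotAfter))
}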

TestDockerFlags (10.25s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-657000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-657000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.020403167s)

-- stdout --
	* [docker-flags-657000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-657000" primary control-plane node in "docker-flags-657000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-657000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 00:51:43.603116    4265 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:51:43.603271    4265 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:51:43.603277    4265 out.go:304] Setting ErrFile to fd 2...
	I0806 00:51:43.603279    4265 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:51:43.603418    4265 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:51:43.604495    4265 out.go:298] Setting JSON to false
	I0806 00:51:43.620449    4265 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3071,"bootTime":1722927632,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:51:43.620521    4265 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:51:43.626961    4265 out.go:177] * [docker-flags-657000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:51:43.633726    4265 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:51:43.633741    4265 notify.go:220] Checking for updates...
	I0806 00:51:43.641813    4265 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:51:43.643190    4265 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:51:43.645822    4265 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:51:43.648788    4265 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:51:43.651871    4265 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:51:43.655193    4265 config.go:182] Loaded profile config "force-systemd-flag-958000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:51:43.655258    4265 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:51:43.655306    4265 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:51:43.659800    4265 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 00:51:43.666742    4265 start.go:297] selected driver: qemu2
	I0806 00:51:43.666748    4265 start.go:901] validating driver "qemu2" against <nil>
	I0806 00:51:43.666754    4265 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:51:43.669152    4265 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:51:43.671850    4265 out.go:177] * Automatically selected the socket_vmnet network
	I0806 00:51:43.674948    4265 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0806 00:51:43.674997    4265 cni.go:84] Creating CNI manager for ""
	I0806 00:51:43.675005    4265 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:51:43.675009    4265 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 00:51:43.675042    4265 start.go:340] cluster config:
	{Name:docker-flags-657000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-657000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:51:43.678863    4265 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:51:43.685843    4265 out.go:177] * Starting "docker-flags-657000" primary control-plane node in "docker-flags-657000" cluster
	I0806 00:51:43.688722    4265 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:51:43.688739    4265 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 00:51:43.688749    4265 cache.go:56] Caching tarball of preloaded images
	I0806 00:51:43.688817    4265 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:51:43.688824    4265 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:51:43.688887    4265 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/docker-flags-657000/config.json ...
	I0806 00:51:43.688899    4265 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/docker-flags-657000/config.json: {Name:mkf91d1d159cb6cc280918a6799ba261ff7ec104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:51:43.689127    4265 start.go:360] acquireMachinesLock for docker-flags-657000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:51:43.689160    4265 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "docker-flags-657000"
	I0806 00:51:43.689172    4265 start.go:93] Provisioning new machine with config: &{Name:docker-flags-657000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-657000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:51:43.689203    4265 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:51:43.696789    4265 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 00:51:43.714384    4265 start.go:159] libmachine.API.Create for "docker-flags-657000" (driver="qemu2")
	I0806 00:51:43.714412    4265 client.go:168] LocalClient.Create starting
	I0806 00:51:43.714479    4265 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:51:43.714510    4265 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:43.714521    4265 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:43.714560    4265 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:51:43.714584    4265 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:43.714591    4265 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:43.714968    4265 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:51:43.867766    4265 main.go:141] libmachine: Creating SSH key...
	I0806 00:51:43.911500    4265 main.go:141] libmachine: Creating Disk image...
	I0806 00:51:43.911505    4265 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:51:43.911677    4265 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/disk.qcow2
	I0806 00:51:43.920691    4265 main.go:141] libmachine: STDOUT: 
	I0806 00:51:43.920709    4265 main.go:141] libmachine: STDERR: 
	I0806 00:51:43.920757    4265 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/disk.qcow2 +20000M
	I0806 00:51:43.928510    4265 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:51:43.928522    4265 main.go:141] libmachine: STDERR: 
	I0806 00:51:43.928539    4265 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/disk.qcow2
	I0806 00:51:43.928544    4265 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:51:43.928554    4265 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:51:43.928579    4265 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:03:72:ba:be:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/disk.qcow2
	I0806 00:51:43.930144    4265 main.go:141] libmachine: STDOUT: 
	I0806 00:51:43.930156    4265 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:51:43.930174    4265 client.go:171] duration metric: took 215.760083ms to LocalClient.Create
	I0806 00:51:45.932333    4265 start.go:128] duration metric: took 2.243122708s to createHost
	I0806 00:51:45.932401    4265 start.go:83] releasing machines lock for "docker-flags-657000", held for 2.243246417s
	W0806 00:51:45.932453    4265 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:51:45.942495    4265 out.go:177] * Deleting "docker-flags-657000" in qemu2 ...
	W0806 00:51:45.972470    4265 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:51:45.972496    4265 start.go:729] Will try again in 5 seconds ...
	I0806 00:51:50.974682    4265 start.go:360] acquireMachinesLock for docker-flags-657000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:51:51.171783    4265 start.go:364] duration metric: took 196.9415ms to acquireMachinesLock for "docker-flags-657000"
	I0806 00:51:51.171966    4265 start.go:93] Provisioning new machine with config: &{Name:docker-flags-657000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-657000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:51:51.172273    4265 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:51:51.181830    4265 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 00:51:51.231980    4265 start.go:159] libmachine.API.Create for "docker-flags-657000" (driver="qemu2")
	I0806 00:51:51.232035    4265 client.go:168] LocalClient.Create starting
	I0806 00:51:51.232198    4265 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:51:51.232270    4265 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:51.232287    4265 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:51.232356    4265 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:51:51.232402    4265 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:51.232418    4265 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:51.232927    4265 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:51:51.398457    4265 main.go:141] libmachine: Creating SSH key...
	I0806 00:51:51.521686    4265 main.go:141] libmachine: Creating Disk image...
	I0806 00:51:51.521692    4265 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:51:51.521919    4265 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/disk.qcow2
	I0806 00:51:51.531056    4265 main.go:141] libmachine: STDOUT: 
	I0806 00:51:51.531077    4265 main.go:141] libmachine: STDERR: 
	I0806 00:51:51.531123    4265 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/disk.qcow2 +20000M
	I0806 00:51:51.538993    4265 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:51:51.539007    4265 main.go:141] libmachine: STDERR: 
	I0806 00:51:51.539017    4265 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/disk.qcow2
	I0806 00:51:51.539021    4265 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:51:51.539030    4265 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:51:51.539054    4265 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:67:34:1c:26:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/docker-flags-657000/disk.qcow2
	I0806 00:51:51.540660    4265 main.go:141] libmachine: STDOUT: 
	I0806 00:51:51.540676    4265 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:51:51.540689    4265 client.go:171] duration metric: took 308.64825ms to LocalClient.Create
	I0806 00:51:53.542848    4265 start.go:128] duration metric: took 2.370559958s to createHost
	I0806 00:51:53.542962    4265 start.go:83] releasing machines lock for "docker-flags-657000", held for 2.371107417s
	W0806 00:51:53.543286    4265 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-657000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-657000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:51:53.555847    4265 out.go:177] 
	W0806 00:51:53.565938    4265 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:51:53.566083    4265 out.go:239] * 
	* 
	W0806 00:51:53.568555    4265 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:51:53.581814    4265 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-657000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-657000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-657000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.416167ms)

-- stdout --
	* The control-plane node docker-flags-657000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-657000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-657000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-657000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-657000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-657000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-657000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-657000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-657000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.654583ms)

-- stdout --
	* The control-plane node docker-flags-657000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-657000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-657000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-657000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-657000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-657000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-06 00:51:53.721774 -0700 PDT m=+2861.023367251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-657000 -n docker-flags-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-657000 -n docker-flags-657000: exit status 7 (27.958625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-657000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-657000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-657000
--- FAIL: TestDockerFlags (10.25s)
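
TestDockerFlags also fails before its assertions become meaningful: docker_test.go:63 and docker_test.go:73 expect FOO=BAR, BAZ=BAT and --debug to surface in `systemctl show docker` output inside the guest, but with the host Stopped the ssh calls only return the state message. A minimal sketch of the Environment-property check being made (the sample line is hypothetical healthy output, and containsEnv is an illustrative helper, not the test's actual code):

// docker_env_check.go: check a systemd Environment property line for a KEY=VALUE pair.
package main

import (
	"fmt"
	"strings"
)

// containsEnv reports whether a "Environment=..." line, as printed by
// `systemctl show docker --property=Environment`, includes the wanted token.
func containsEnv(propertyLine, want string) bool {
	line := strings.TrimPrefix(propertyLine, "Environment=")
	for _, tok := range strings.Fields(line) {
		if tok == want {
			return true
		}
	}
	return false
}

func main() {
	sample := "Environment=FOO=BAR BAZ=BAT" // hypothetical healthy output
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("%s present: %t\n", want, containsEnv(sample, want))
	}
}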

TestForceSystemdFlag (10.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-958000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-958000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.884640875s)

-- stdout --
	* [force-systemd-flag-958000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-958000" primary control-plane node in "force-systemd-flag-958000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-958000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 00:51:38.624658    4244 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:51:38.624804    4244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:51:38.624807    4244 out.go:304] Setting ErrFile to fd 2...
	I0806 00:51:38.624810    4244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:51:38.624935    4244 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:51:38.626016    4244 out.go:298] Setting JSON to false
	I0806 00:51:38.641901    4244 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3066,"bootTime":1722927632,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:51:38.641990    4244 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:51:38.648841    4244 out.go:177] * [force-systemd-flag-958000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:51:38.655951    4244 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:51:38.656009    4244 notify.go:220] Checking for updates...
	I0806 00:51:38.661241    4244 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:51:38.663934    4244 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:51:38.666996    4244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:51:38.669980    4244 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:51:38.672946    4244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:51:38.676261    4244 config.go:182] Loaded profile config "force-systemd-env-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:51:38.676348    4244 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:51:38.676395    4244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:51:38.680906    4244 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 00:51:38.687975    4244 start.go:297] selected driver: qemu2
	I0806 00:51:38.687982    4244 start.go:901] validating driver "qemu2" against <nil>
	I0806 00:51:38.687989    4244 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:51:38.690272    4244 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:51:38.693913    4244 out.go:177] * Automatically selected the socket_vmnet network
	I0806 00:51:38.696995    4244 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 00:51:38.697015    4244 cni.go:84] Creating CNI manager for ""
	I0806 00:51:38.697031    4244 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:51:38.697035    4244 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 00:51:38.697063    4244 start.go:340] cluster config:
	{Name:force-systemd-flag-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:51:38.700596    4244 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:51:38.707773    4244 out.go:177] * Starting "force-systemd-flag-958000" primary control-plane node in "force-systemd-flag-958000" cluster
	I0806 00:51:38.711932    4244 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:51:38.711954    4244 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 00:51:38.711963    4244 cache.go:56] Caching tarball of preloaded images
	I0806 00:51:38.712047    4244 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:51:38.712053    4244 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:51:38.712116    4244 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/force-systemd-flag-958000/config.json ...
	I0806 00:51:38.712127    4244 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/force-systemd-flag-958000/config.json: {Name:mkffc867ed1b60e6a9f5721681f797bdfbae09f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:51:38.712373    4244 start.go:360] acquireMachinesLock for force-systemd-flag-958000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:51:38.712410    4244 start.go:364] duration metric: took 30.208µs to acquireMachinesLock for "force-systemd-flag-958000"
	I0806 00:51:38.712422    4244 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-958000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
	I0806 00:51:38.712456    4244 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:51:38.719018    4244 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 00:51:38.736862    4244 start.go:159] libmachine.API.Create for "force-systemd-flag-958000" (driver="qemu2")
	I0806 00:51:38.736888    4244 client.go:168] LocalClient.Create starting
	I0806 00:51:38.736954    4244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:51:38.736985    4244 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:38.737000    4244 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:38.737035    4244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:51:38.737063    4244 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:38.737072    4244 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:38.737450    4244 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:51:38.887716    4244 main.go:141] libmachine: Creating SSH key...
	I0806 00:51:39.015762    4244 main.go:141] libmachine: Creating Disk image...
	I0806 00:51:39.015770    4244 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:51:39.015945    4244 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/disk.qcow2
	I0806 00:51:39.025215    4244 main.go:141] libmachine: STDOUT: 
	I0806 00:51:39.025237    4244 main.go:141] libmachine: STDERR: 
	I0806 00:51:39.025291    4244 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/disk.qcow2 +20000M
	I0806 00:51:39.033095    4244 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:51:39.033107    4244 main.go:141] libmachine: STDERR: 
	I0806 00:51:39.033120    4244 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/disk.qcow2
	I0806 00:51:39.033124    4244 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:51:39.033139    4244 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:51:39.033165    4244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:ec:f9:47:bf:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/disk.qcow2
	I0806 00:51:39.034783    4244 main.go:141] libmachine: STDOUT: 
	I0806 00:51:39.034797    4244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:51:39.034812    4244 client.go:171] duration metric: took 297.923417ms to LocalClient.Create
	I0806 00:51:41.036996    4244 start.go:128] duration metric: took 2.324538917s to createHost
	I0806 00:51:41.037057    4244 start.go:83] releasing machines lock for "force-systemd-flag-958000", held for 2.324652333s
	W0806 00:51:41.037192    4244 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:51:41.056042    4244 out.go:177] * Deleting "force-systemd-flag-958000" in qemu2 ...
	W0806 00:51:41.077611    4244 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:51:41.077630    4244 start.go:729] Will try again in 5 seconds ...
	I0806 00:51:46.079771    4244 start.go:360] acquireMachinesLock for force-systemd-flag-958000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:51:46.080197    4244 start.go:364] duration metric: took 300.917µs to acquireMachinesLock for "force-systemd-flag-958000"
	I0806 00:51:46.080305    4244 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-958000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:51:46.080544    4244 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:51:46.089970    4244 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 00:51:46.139754    4244 start.go:159] libmachine.API.Create for "force-systemd-flag-958000" (driver="qemu2")
	I0806 00:51:46.139805    4244 client.go:168] LocalClient.Create starting
	I0806 00:51:46.139937    4244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:51:46.139997    4244 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:46.140015    4244 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:46.140081    4244 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:51:46.140127    4244 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:46.140138    4244 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:46.141331    4244 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:51:46.309180    4244 main.go:141] libmachine: Creating SSH key...
	I0806 00:51:46.418921    4244 main.go:141] libmachine: Creating Disk image...
	I0806 00:51:46.418926    4244 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:51:46.419105    4244 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/disk.qcow2
	I0806 00:51:46.428373    4244 main.go:141] libmachine: STDOUT: 
	I0806 00:51:46.428390    4244 main.go:141] libmachine: STDERR: 
	I0806 00:51:46.428444    4244 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/disk.qcow2 +20000M
	I0806 00:51:46.436173    4244 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:51:46.436192    4244 main.go:141] libmachine: STDERR: 
	I0806 00:51:46.436204    4244 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/disk.qcow2
	I0806 00:51:46.436209    4244 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:51:46.436216    4244 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:51:46.436247    4244 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:4f:1e:01:63:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-flag-958000/disk.qcow2
	I0806 00:51:46.437850    4244 main.go:141] libmachine: STDOUT: 
	I0806 00:51:46.437864    4244 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:51:46.437877    4244 client.go:171] duration metric: took 298.067792ms to LocalClient.Create
	I0806 00:51:48.440041    4244 start.go:128] duration metric: took 2.359482292s to createHost
	I0806 00:51:48.440091    4244 start.go:83] releasing machines lock for "force-systemd-flag-958000", held for 2.359885833s
	W0806 00:51:48.440536    4244 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-958000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-958000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:51:48.454188    4244 out.go:177] 
	W0806 00:51:48.458315    4244 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:51:48.458351    4244 out.go:239] * 
	* 
	W0806 00:51:48.460795    4244 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:51:48.469156    4244 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-958000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-958000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-958000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (74.823625ms)

-- stdout --
	* The control-plane node force-systemd-flag-958000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-958000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-958000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
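The probe at docker_test.go:110 can only succeed once the VM is actually running; on a healthy cluster started with --force-systemd the same command is expected to report the systemd cgroup driver rather than Docker's default cgroupfs. A hand-run sketch of the check (the expected output is inferred from the test's purpose, not observed in this run):

	out/minikube-darwin-arm64 -p force-systemd-flag-958000 ssh "docker info --format {{.CgroupDriver}}"
	# expected on a successful start: systemd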
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-06 00:51:48.5608 -0700 PDT m=+2855.862360126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-958000 -n force-systemd-flag-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-958000 -n force-systemd-flag-958000: exit status 7 (33.346208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-958000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-958000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-958000
--- FAIL: TestForceSystemdFlag (10.07s)
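Both createHost attempts above fail at the same step: socket_vmnet_client cannot dial /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. A minimal pre-flight check on the build host, assuming socket_vmnet is installed at the paths shown in the log, might look like:

	ls -l /var/run/socket_vmnet     # the unix socket the qemu2 driver dials
	pgrep -fl socket_vmnet          # confirm the daemon process is alive
	# "Connection refused" with the socket file present usually means the
	# daemon has died; restarting it should clear this failure mode.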

TestForceSystemdEnv (10.84s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-873000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-873000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.654200625s)

-- stdout --
	* [force-systemd-env-873000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-873000" primary control-plane node in "force-systemd-env-873000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-873000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 00:51:32.761273    4210 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:51:32.761424    4210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:51:32.761428    4210 out.go:304] Setting ErrFile to fd 2...
	I0806 00:51:32.761430    4210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:51:32.761560    4210 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:51:32.762549    4210 out.go:298] Setting JSON to false
	I0806 00:51:32.778925    4210 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3060,"bootTime":1722927632,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:51:32.779001    4210 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:51:32.785091    4210 out.go:177] * [force-systemd-env-873000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:51:32.791707    4210 notify.go:220] Checking for updates...
	I0806 00:51:32.797101    4210 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:51:32.805050    4210 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:51:32.813048    4210 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:51:32.820914    4210 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:51:32.829033    4210 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:51:32.836105    4210 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0806 00:51:32.840425    4210 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:51:32.840475    4210 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:51:32.844087    4210 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 00:51:32.851117    4210 start.go:297] selected driver: qemu2
	I0806 00:51:32.851122    4210 start.go:901] validating driver "qemu2" against <nil>
	I0806 00:51:32.851128    4210 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:51:32.853368    4210 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:51:32.857081    4210 out.go:177] * Automatically selected the socket_vmnet network
	I0806 00:51:32.860186    4210 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 00:51:32.860205    4210 cni.go:84] Creating CNI manager for ""
	I0806 00:51:32.860220    4210 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:51:32.860236    4210 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 00:51:32.860273    4210 start.go:340] cluster config:
	{Name:force-systemd-env-873000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:51:32.863959    4210 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:51:32.870129    4210 out.go:177] * Starting "force-systemd-env-873000" primary control-plane node in "force-systemd-env-873000" cluster
	I0806 00:51:32.874119    4210 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:51:32.874137    4210 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 00:51:32.874146    4210 cache.go:56] Caching tarball of preloaded images
	I0806 00:51:32.874199    4210 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:51:32.874204    4210 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:51:32.874266    4210 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/force-systemd-env-873000/config.json ...
	I0806 00:51:32.874284    4210 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/force-systemd-env-873000/config.json: {Name:mkd7acc52332ab08dbf155293f4a325ddbfdf6d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:51:32.874505    4210 start.go:360] acquireMachinesLock for force-systemd-env-873000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:51:32.874539    4210 start.go:364] duration metric: took 27.083µs to acquireMachinesLock for "force-systemd-env-873000"
	I0806 00:51:32.874550    4210 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:51:32.874579    4210 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:51:32.883093    4210 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 00:51:32.901017    4210 start.go:159] libmachine.API.Create for "force-systemd-env-873000" (driver="qemu2")
	I0806 00:51:32.901053    4210 client.go:168] LocalClient.Create starting
	I0806 00:51:32.901124    4210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:51:32.901155    4210 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:32.901168    4210 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:32.901211    4210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:51:32.901235    4210 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:32.901246    4210 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:32.901600    4210 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:51:33.053484    4210 main.go:141] libmachine: Creating SSH key...
	I0806 00:51:33.181524    4210 main.go:141] libmachine: Creating Disk image...
	I0806 00:51:33.181530    4210 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:51:33.181708    4210 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/disk.qcow2
	I0806 00:51:33.191148    4210 main.go:141] libmachine: STDOUT: 
	I0806 00:51:33.191167    4210 main.go:141] libmachine: STDERR: 
	I0806 00:51:33.191218    4210 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/disk.qcow2 +20000M
	I0806 00:51:33.199325    4210 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:51:33.199338    4210 main.go:141] libmachine: STDERR: 
	I0806 00:51:33.199362    4210 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/disk.qcow2
	I0806 00:51:33.199366    4210 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:51:33.199377    4210 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:51:33.199404    4210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:cf:0a:d4:b6:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/disk.qcow2
	I0806 00:51:33.201116    4210 main.go:141] libmachine: STDOUT: 
	I0806 00:51:33.201128    4210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:51:33.201145    4210 client.go:171] duration metric: took 300.086583ms to LocalClient.Create
	I0806 00:51:35.203196    4210 start.go:128] duration metric: took 2.328623791s to createHost
	I0806 00:51:35.203228    4210 start.go:83] releasing machines lock for "force-systemd-env-873000", held for 2.3286995s
	W0806 00:51:35.203244    4210 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:51:35.211128    4210 out.go:177] * Deleting "force-systemd-env-873000" in qemu2 ...
	W0806 00:51:35.225245    4210 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:51:35.225256    4210 start.go:729] Will try again in 5 seconds ...
	I0806 00:51:40.227468    4210 start.go:360] acquireMachinesLock for force-systemd-env-873000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:51:41.037303    4210 start.go:364] duration metric: took 809.709625ms to acquireMachinesLock for "force-systemd-env-873000"
	I0806 00:51:41.037391    4210 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:51:41.037650    4210 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:51:41.048086    4210 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 00:51:41.098115    4210 start.go:159] libmachine.API.Create for "force-systemd-env-873000" (driver="qemu2")
	I0806 00:51:41.098158    4210 client.go:168] LocalClient.Create starting
	I0806 00:51:41.098287    4210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:51:41.098350    4210 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:41.098368    4210 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:41.098447    4210 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:51:41.098492    4210 main.go:141] libmachine: Decoding PEM data...
	I0806 00:51:41.098502    4210 main.go:141] libmachine: Parsing certificate...
	I0806 00:51:41.099175    4210 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:51:41.265127    4210 main.go:141] libmachine: Creating SSH key...
	I0806 00:51:41.319495    4210 main.go:141] libmachine: Creating Disk image...
	I0806 00:51:41.319500    4210 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:51:41.319737    4210 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/disk.qcow2
	I0806 00:51:41.328880    4210 main.go:141] libmachine: STDOUT: 
	I0806 00:51:41.328899    4210 main.go:141] libmachine: STDERR: 
	I0806 00:51:41.328946    4210 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/disk.qcow2 +20000M
	I0806 00:51:41.336791    4210 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:51:41.336807    4210 main.go:141] libmachine: STDERR: 
	I0806 00:51:41.336821    4210 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/disk.qcow2
	I0806 00:51:41.336824    4210 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:51:41.336838    4210 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:51:41.336865    4210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:4d:a2:50:84:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/force-systemd-env-873000/disk.qcow2
	I0806 00:51:41.338503    4210 main.go:141] libmachine: STDOUT: 
	I0806 00:51:41.338517    4210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:51:41.338534    4210 client.go:171] duration metric: took 240.3715ms to LocalClient.Create
	I0806 00:51:43.340816    4210 start.go:128] duration metric: took 2.303113541s to createHost
	I0806 00:51:43.340889    4210 start.go:83] releasing machines lock for "force-systemd-env-873000", held for 2.303568083s
	W0806 00:51:43.341205    4210 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-873000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-873000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:51:43.355777    4210 out.go:177] 
	W0806 00:51:43.360765    4210 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:51:43.360844    4210 out.go:239] * 
	* 
	W0806 00:51:43.363468    4210 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:51:43.373622    4210 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-873000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-873000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-873000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.31425ms)

-- stdout --
	* The control-plane node force-systemd-env-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-873000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-873000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-06 00:51:43.465824 -0700 PDT m=+2850.767350584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-873000 -n force-systemd-env-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-873000 -n force-systemd-env-873000: exit status 7 (31.942583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-873000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-873000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-873000
--- FAIL: TestForceSystemdEnv (10.84s)
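TestForceSystemdEnv exercises the same cgroup-driver behavior as TestForceSystemdFlag but requests it through the environment (note MINIKUBE_FORCE_SYSTEMD=true in the stdout above) rather than a flag; here it fails at the identical socket_vmnet step before the variable can matter. A hand-run reproduction under that assumption:

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-darwin-arm64 start -p force-systemd-env-873000 --memory=2048 --driver=qemu2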

TestFunctional/parallel/ServiceCmdConnect (28.5s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-804000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-804000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-q6fxv" [645fb0eb-f0e9-4b2d-89fe-49e5c01d0c81] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-q6fxv" [645fb0eb-f0e9-4b2d-89fe-49e5c01d0c81] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003789042s
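The readiness wait performed by functional_test.go:1636 can be approximated with kubectl's built-in wait; this is a sketch of the equivalent manual step, not the test's actual implementation. In this run the pod reached Running but never Ready (see the ContainersNotReady status above), which a strict wait on the Ready condition would have surfaced immediately:

	kubectl --context functional-804000 wait pod -l app=hello-node-connect --for=condition=Ready --timeout=10m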
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:31731
functional_test.go:1657: error fetching http://192.168.105.4:31731: Get "http://192.168.105.4:31731": dial tcp 192.168.105.4:31731: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31731: Get "http://192.168.105.4:31731": dial tcp 192.168.105.4:31731: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31731: Get "http://192.168.105.4:31731": dial tcp 192.168.105.4:31731: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31731: Get "http://192.168.105.4:31731": dial tcp 192.168.105.4:31731: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31731: Get "http://192.168.105.4:31731": dial tcp 192.168.105.4:31731: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31731: Get "http://192.168.105.4:31731": dial tcp 192.168.105.4:31731: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31731: Get "http://192.168.105.4:31731": dial tcp 192.168.105.4:31731: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:31731: Get "http://192.168.105.4:31731": dial tcp 192.168.105.4:31731: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-804000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-q6fxv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-804000/192.168.105.4
Start Time:       Tue, 06 Aug 2024 00:16:03 -0700
Labels:           app=hello-node-connect
pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
echoserver-arm:
Container ID:   docker://03a95aa7e16833e609c3e88476538c3676b8d86ad06246c23fb0924ed1ef3c1d
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Error
Exit Code:    1
Started:      Tue, 06 Aug 2024 00:16:20 -0700
Finished:     Tue, 06 Aug 2024 00:16:20 -0700
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Tue, 06 Aug 2024 00:16:05 -0700
Finished:     Tue, 06 Aug 2024 00:16:05 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rkzgb (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-rkzgb:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  28s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-q6fxv to functional-804000
Normal   Pulled     11s (x3 over 27s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    11s (x3 over 27s)  kubelet            Created container echoserver-arm
Normal   Started    11s (x3 over 27s)  kubelet            Started container echoserver-arm
Warning  BackOff    10s (x3 over 25s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-q6fxv_default(645fb0eb-f0e9-4b2d-89fe-49e5c01d0c81)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-804000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
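That "exec format error" is the root cause of the connection-refused loop above: the entrypoint binary inside the echoserver-arm image is built for a different CPU architecture than the node, so the container exits on every restart and the service never gains a ready backend. One way to confirm such a mismatch, assuming Docker CLI access to the node's runtime:

	docker image inspect registry.k8s.io/echoserver-arm:1.8 --format '{{.Architecture}}'
	uname -m    # run on the node; the two values must agree for exec to work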
functional_test.go:1610: (dbg) Run:  kubectl --context functional-804000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.188.173
IPs:                      10.109.188.173
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31731/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
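Note the empty Endpoints field in the description above: with the backing pod never Ready, the NodePort at 192.168.105.4:31731 has nothing to forward to, which matches the connection-refused errors earlier in the test. A direct check (context and service name taken from this log):

	kubectl --context functional-804000 get endpoints hello-node-connect
	# the ENDPOINTS column stays empty until a backing pod passes readiness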
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-804000 -n functional-804000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                      Args                                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-804000 image save                                                                                    | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:15 PDT | 06 Aug 24 00:15 PDT |
	|         | docker.io/kicbase/echo-server:functional-804000                                                                 |                   |         |         |                     |                     |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-804000 image rm                                                                                      | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:15 PDT | 06 Aug 24 00:15 PDT |
	|         | docker.io/kicbase/echo-server:functional-804000                                                                 |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-804000 image ls                                                                                      | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:15 PDT | 06 Aug 24 00:15 PDT |
	| image   | functional-804000 image load                                                                                    | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:15 PDT | 06 Aug 24 00:15 PDT |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-804000 image ls                                                                                      | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:15 PDT | 06 Aug 24 00:15 PDT |
	| image   | functional-804000 image save --daemon                                                                           | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:15 PDT | 06 Aug 24 00:15 PDT |
	|         | docker.io/kicbase/echo-server:functional-804000                                                                 |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-804000 ssh echo                                                                                      | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:15 PDT | 06 Aug 24 00:15 PDT |
	|         | hello                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-804000 ssh cat                                                                                       | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:15 PDT | 06 Aug 24 00:15 PDT |
	|         | /etc/hostname                                                                                                   |                   |         |         |                     |                     |
	| tunnel  | functional-804000 tunnel                                                                                        | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:15 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-804000 tunnel                                                                                        | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:15 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-804000 tunnel                                                                                        | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:15 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| service | functional-804000 service list                                                                                  | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT | 06 Aug 24 00:16 PDT |
	| service | functional-804000 service list                                                                                  | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT | 06 Aug 24 00:16 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-804000 service                                                                                       | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT | 06 Aug 24 00:16 PDT |
	|         | --namespace=default --https                                                                                     |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                |                   |         |         |                     |                     |
	| service | functional-804000                                                                                               | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT | 06 Aug 24 00:16 PDT |
	|         | service hello-node --url                                                                                        |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                |                   |         |         |                     |                     |
	| service | functional-804000 service                                                                                       | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT | 06 Aug 24 00:16 PDT |
	|         | hello-node --url                                                                                                |                   |         |         |                     |                     |
	| addons  | functional-804000 addons list                                                                                   | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT | 06 Aug 24 00:16 PDT |
	| addons  | functional-804000 addons list                                                                                   | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT | 06 Aug 24 00:16 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-804000 service                                                                                       | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT | 06 Aug 24 00:16 PDT |
	|         | hello-node-connect --url                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-804000 ssh findmnt                                                                                   | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| mount   | -p functional-804000                                                                                            | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3089474598/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-804000 ssh findmnt                                                                                   | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-804000 ssh findmnt                                                                                   | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT | 06 Aug 24 00:16 PDT |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-804000 ssh -- ls                                                                                     | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT | 06 Aug 24 00:16 PDT |
	|         | -la /mount-9p                                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-804000 ssh cat                                                                                       | functional-804000 | jenkins | v1.33.1 | 06 Aug 24 00:16 PDT | 06 Aug 24 00:16 PDT |
	|         | /mount-9p/test-1722928586966948000                                                                              |                   |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:15:01
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:15:01.484484    2086 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:15:01.484603    2086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:15:01.484605    2086 out.go:304] Setting ErrFile to fd 2...
	I0806 00:15:01.484607    2086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:15:01.484727    2086 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:15:01.485856    2086 out.go:298] Setting JSON to false
	I0806 00:15:01.503692    2086 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":869,"bootTime":1722927632,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:15:01.503794    2086 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:15:01.508699    2086 out.go:177] * [functional-804000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:15:01.516718    2086 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:15:01.516764    2086 notify.go:220] Checking for updates...
	I0806 00:15:01.524633    2086 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:15:01.527704    2086 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:15:01.530692    2086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:15:01.533661    2086 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:15:01.536702    2086 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:15:01.539966    2086 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:15:01.540017    2086 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:15:01.543656    2086 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 00:15:01.550731    2086 start.go:297] selected driver: qemu2
	I0806 00:15:01.550735    2086 start.go:901] validating driver "qemu2" against &{Name:functional-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:15:01.550778    2086 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:15:01.552945    2086 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:15:01.552973    2086 cni.go:84] Creating CNI manager for ""
	I0806 00:15:01.552981    2086 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:15:01.553039    2086 start.go:340] cluster config:
	{Name:functional-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:15:01.556303    2086 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:15:01.564683    2086 out.go:177] * Starting "functional-804000" primary control-plane node in "functional-804000" cluster
	I0806 00:15:01.568678    2086 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:15:01.568688    2086 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 00:15:01.568696    2086 cache.go:56] Caching tarball of preloaded images
	I0806 00:15:01.568745    2086 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:15:01.568748    2086 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:15:01.568789    2086 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/config.json ...
	I0806 00:15:01.569319    2086 start.go:360] acquireMachinesLock for functional-804000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:15:01.569353    2086 start.go:364] duration metric: took 30.208µs to acquireMachinesLock for "functional-804000"
	I0806 00:15:01.569359    2086 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:15:01.569366    2086 fix.go:54] fixHost starting: 
	I0806 00:15:01.569923    2086 fix.go:112] recreateIfNeeded on functional-804000: state=Running err=<nil>
	W0806 00:15:01.569929    2086 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:15:01.576678    2086 out.go:177] * Updating the running qemu2 "functional-804000" VM ...
	I0806 00:15:01.579688    2086 machine.go:94] provisionDockerMachine start ...
	I0806 00:15:01.579723    2086 main.go:141] libmachine: Using SSH client type: native
	I0806 00:15:01.579821    2086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d6a10] 0x1051d9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0806 00:15:01.579823    2086 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:15:01.621017    2086 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-804000
	
	I0806 00:15:01.621029    2086 buildroot.go:166] provisioning hostname "functional-804000"
	I0806 00:15:01.621071    2086 main.go:141] libmachine: Using SSH client type: native
	I0806 00:15:01.621178    2086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d6a10] 0x1051d9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0806 00:15:01.621181    2086 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-804000 && echo "functional-804000" | sudo tee /etc/hostname
	I0806 00:15:01.668538    2086 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-804000
	
	I0806 00:15:01.668592    2086 main.go:141] libmachine: Using SSH client type: native
	I0806 00:15:01.668696    2086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d6a10] 0x1051d9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0806 00:15:01.668702    2086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-804000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-804000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-804000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:15:01.711483    2086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
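	
	The shell snippet above is idempotent: the outer grep guard leaves /etc/hosts untouched once a functional-804000 entry exists, and the inner branch only decides between rewriting the existing 127.0.1.1 line and appending a new one. A quick way to confirm the result from the host is a sketch like the following (minikube ssh and the -p profile flag are standard minikube CLI; the expected line is exactly what the echo/sed above writes):
	
		$ minikube -p functional-804000 ssh -- grep functional-804000 /etc/hosts
		127.0.1.1 functional-804000
	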
	I0806 00:15:01.711492    2086 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-965/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-965/.minikube}
	I0806 00:15:01.711504    2086 buildroot.go:174] setting up certificates
	I0806 00:15:01.711507    2086 provision.go:84] configureAuth start
	I0806 00:15:01.711512    2086 provision.go:143] copyHostCerts
	I0806 00:15:01.711577    2086 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-965/.minikube/ca.pem, removing ...
	I0806 00:15:01.711581    2086 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-965/.minikube/ca.pem
	I0806 00:15:01.711693    2086 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-965/.minikube/ca.pem (1082 bytes)
	I0806 00:15:01.711878    2086 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-965/.minikube/cert.pem, removing ...
	I0806 00:15:01.711880    2086 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-965/.minikube/cert.pem
	I0806 00:15:01.711937    2086 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-965/.minikube/cert.pem (1123 bytes)
	I0806 00:15:01.712045    2086 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-965/.minikube/key.pem, removing ...
	I0806 00:15:01.712047    2086 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-965/.minikube/key.pem
	I0806 00:15:01.712095    2086 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-965/.minikube/key.pem (1675 bytes)
	I0806 00:15:01.712174    2086 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-965/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca-key.pem org=jenkins.functional-804000 san=[127.0.0.1 192.168.105.4 functional-804000 localhost minikube]
	I0806 00:15:01.801666    2086 provision.go:177] copyRemoteCerts
	I0806 00:15:01.801695    2086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:15:01.801700    2086 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/functional-804000/id_rsa Username:docker}
	I0806 00:15:01.828119    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 00:15:01.837034    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0806 00:15:01.845405    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 00:15:01.853944    2086 provision.go:87] duration metric: took 142.434041ms to configureAuth
	I0806 00:15:01.853950    2086 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:15:01.854071    2086 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:15:01.854104    2086 main.go:141] libmachine: Using SSH client type: native
	I0806 00:15:01.854198    2086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d6a10] 0x1051d9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0806 00:15:01.854201    2086 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:15:01.896439    2086 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:15:01.896447    2086 buildroot.go:70] root file system type: tmpfs
	I0806 00:15:01.896510    2086 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:15:01.896573    2086 main.go:141] libmachine: Using SSH client type: native
	I0806 00:15:01.896696    2086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d6a10] 0x1051d9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0806 00:15:01.896727    2086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:15:01.942935    2086 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:15:01.942986    2086 main.go:141] libmachine: Using SSH client type: native
	I0806 00:15:01.943103    2086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d6a10] 0x1051d9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0806 00:15:01.943109    2086 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:15:01.987314    2086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:15:01.987320    2086 machine.go:97] duration metric: took 407.631958ms to provisionDockerMachine
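	
	Two details of the unit rendered above are worth noting. The empty ExecStart= line is the standard systemd idiom the embedded comments describe: it clears the value inherited from the base unit so that the following ExecStart= is the only one, since a Type=notify service refuses to start with more than one. And the diff -u ... || { mv ...; systemctl ...; } command makes the write conditional, so docker is only reloaded and restarted when the newly rendered file actually differs. The effective command line can be checked afterwards with a sketch like this (systemctl cat is the same invocation the log runs further down):
	
		$ minikube -p functional-804000 ssh -- systemctl cat docker.service | grep '^ExecStart='
	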
	I0806 00:15:01.987324    2086 start.go:293] postStartSetup for "functional-804000" (driver="qemu2")
	I0806 00:15:01.987329    2086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:15:01.987371    2086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:15:01.987378    2086 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/functional-804000/id_rsa Username:docker}
	I0806 00:15:02.011366    2086 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:15:02.012830    2086 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:15:02.012835    2086 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-965/.minikube/addons for local assets ...
	I0806 00:15:02.012918    2086 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-965/.minikube/files for local assets ...
	I0806 00:15:02.013040    2086 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem -> 14552.pem in /etc/ssl/certs
	I0806 00:15:02.013152    2086 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/test/nested/copy/1455/hosts -> hosts in /etc/test/nested/copy/1455
	I0806 00:15:02.013185    2086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1455
	I0806 00:15:02.016539    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem --> /etc/ssl/certs/14552.pem (1708 bytes)
	I0806 00:15:02.024961    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/test/nested/copy/1455/hosts --> /etc/test/nested/copy/1455/hosts (40 bytes)
	I0806 00:15:02.033821    2086 start.go:296] duration metric: took 46.491875ms for postStartSetup
	I0806 00:15:02.033834    2086 fix.go:56] duration metric: took 464.473583ms for fixHost
	I0806 00:15:02.033873    2086 main.go:141] libmachine: Using SSH client type: native
	I0806 00:15:02.033978    2086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051d6a10] 0x1051d9270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0806 00:15:02.033981    2086 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:15:02.075709    2086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722928502.171557867
	
	I0806 00:15:02.075713    2086 fix.go:216] guest clock: 1722928502.171557867
	I0806 00:15:02.075716    2086 fix.go:229] Guest: 2024-08-06 00:15:02.171557867 -0700 PDT Remote: 2024-08-06 00:15:02.033835 -0700 PDT m=+0.567578501 (delta=137.722867ms)
	I0806 00:15:02.075729    2086 fix.go:200] guest clock delta is within tolerance: 137.722867ms
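	
	The clock check runs date +%s.%N in the guest (epoch seconds with nanoseconds) and diffs the result against the host clock at the moment the command returns: 1722928502.171557867 in the guest minus the host's 00:15:02.033835 gives the 137.722867ms delta logged above. Reproducing it by hand is a two-liner, with the caveat that BSD date on the macOS host has no %N, so the host side only resolves to whole seconds (a sketch):
	
		$ minikube -p functional-804000 ssh -- date +%s.%N   # guest clock, nanosecond precision
		$ date +%s                                           # host clock, macOS: seconds only
	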
	I0806 00:15:02.075730    2086 start.go:83] releasing machines lock for "functional-804000", held for 506.378167ms
	I0806 00:15:02.076002    2086 ssh_runner.go:195] Run: cat /version.json
	I0806 00:15:02.076006    2086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:15:02.076009    2086 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/functional-804000/id_rsa Username:docker}
	I0806 00:15:02.076024    2086 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/functional-804000/id_rsa Username:docker}
	I0806 00:15:02.145338    2086 ssh_runner.go:195] Run: systemctl --version
	I0806 00:15:02.147318    2086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 00:15:02.149155    2086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:15:02.149180    2086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:15:02.152413    2086 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0806 00:15:02.152422    2086 start.go:495] detecting cgroup driver to use...
	I0806 00:15:02.152493    2086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:15:02.158832    2086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:15:02.162725    2086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:15:02.166551    2086 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:15:02.166572    2086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:15:02.170563    2086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:15:02.174473    2086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:15:02.178460    2086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:15:02.182334    2086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:15:02.185963    2086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:15:02.189863    2086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:15:02.194180    2086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:15:02.198316    2086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:15:02.201681    2086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:15:02.205427    2086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:15:02.319313    2086 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:15:02.329239    2086 start.go:495] detecting cgroup driver to use...
	I0806 00:15:02.329298    2086 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:15:02.335576    2086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:15:02.341303    2086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:15:02.348872    2086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:15:02.354491    2086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:15:02.360184    2086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:15:02.366349    2086 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:15:02.367998    2086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:15:02.371033    2086 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:15:02.377072    2086 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:15:02.492143    2086 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:15:02.592974    2086 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:15:02.593020    2086 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:15:02.599813    2086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:15:02.718353    2086 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:15:15.039703    2086 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.321413083s)
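	
	With docker restarted, the cgroup driver pinned by the /etc/docker/daemon.json written above can be verified using the same format string the log itself runs later; cgroupfs is the expected answer given the "configuring docker to use cgroupfs" line (a sketch):
	
		$ minikube -p functional-804000 ssh -- docker info --format '{{.CgroupDriver}}'
		cgroupfs
	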
	I0806 00:15:15.039769    2086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:15:15.045650    2086 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:15:15.053429    2086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:15:15.058908    2086 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:15:15.153022    2086 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:15:15.249168    2086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:15:15.335540    2086 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:15:15.342720    2086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:15:15.348089    2086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:15:15.437479    2086 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:15:15.466995    2086 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:15:15.467069    2086 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:15:15.469511    2086 start.go:563] Will wait 60s for crictl version
	I0806 00:15:15.469550    2086 ssh_runner.go:195] Run: which crictl
	I0806 00:15:15.471007    2086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:15:15.483641    2086 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 00:15:15.483705    2086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:15:15.491240    2086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:15:15.509840    2086 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0806 00:15:15.509901    2086 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0806 00:15:15.516772    2086 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0806 00:15:15.520648    2086 kubeadm.go:883] updating cluster {Name:functional-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:15:15.520714    2086 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:15:15.520770    2086 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:15:15.533670    2086 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-804000
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0806 00:15:15.533678    2086 docker.go:615] Images already preloaded, skipping extraction
	I0806 00:15:15.533729    2086 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:15:15.539092    2086 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-804000
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0806 00:15:15.539105    2086 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:15:15.539109    2086 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.30.3 docker true true} ...
	I0806 00:15:15.539176    2086 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-804000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:functional-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:15:15.539232    2086 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:15:15.557222    2086 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0806 00:15:15.557265    2086 cni.go:84] Creating CNI manager for ""
	I0806 00:15:15.557271    2086 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:15:15.557275    2086 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:15:15.557284    2086 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-804000 NodeName:functional-804000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:15:15.557357    2086 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-804000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:15:15.557423    2086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:15:15.560890    2086 binaries.go:44] Found k8s binaries, skipping transfer
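	
	The kubeadm.yaml generated above can also be sanity-checked against the same release minikube just found cached in the guest. kubeadm config validate is available in recent kubeadm releases including v1.30; a sketch, using the binaries path from the ls above and the .new file scp'd below:
	
		$ minikube -p functional-804000 ssh -- sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	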
	I0806 00:15:15.560918    2086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:15:15.564190    2086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0806 00:15:15.570193    2086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:15:15.576044    2086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
	I0806 00:15:15.582273    2086 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0806 00:15:15.583737    2086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:15:15.668016    2086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:15:15.674092    2086 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000 for IP: 192.168.105.4
	I0806 00:15:15.674095    2086 certs.go:194] generating shared ca certs ...
	I0806 00:15:15.674102    2086 certs.go:226] acquiring lock for ca certs: {Name:mkb2ca998ea1a45f9f580d4d76a58064c889c60a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:15:15.674257    2086 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-965/.minikube/ca.key
	I0806 00:15:15.674313    2086 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-965/.minikube/proxy-client-ca.key
	I0806 00:15:15.674318    2086 certs.go:256] generating profile certs ...
	I0806 00:15:15.674372    2086 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.key
	I0806 00:15:15.674416    2086 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/apiserver.key.866e9c15
	I0806 00:15:15.674463    2086 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/proxy-client.key
	I0806 00:15:15.674623    2086 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/1455.pem (1338 bytes)
	W0806 00:15:15.674648    2086 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-965/.minikube/certs/1455_empty.pem, impossibly tiny 0 bytes
	I0806 00:15:15.674652    2086 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca-key.pem (1679 bytes)
	I0806 00:15:15.674672    2086 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem (1082 bytes)
	I0806 00:15:15.674693    2086 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:15:15.674708    2086 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/key.pem (1675 bytes)
	I0806 00:15:15.674749    2086 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem (1708 bytes)
	I0806 00:15:15.675091    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:15:15.683950    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:15:15.692145    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:15:15.700538    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:15:15.708392    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 00:15:15.716331    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:15:15.724344    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:15:15.732375    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0806 00:15:15.740300    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:15:15.748248    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/certs/1455.pem --> /usr/share/ca-certificates/1455.pem (1338 bytes)
	I0806 00:15:15.756448    2086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1708 bytes)
	I0806 00:15:15.764358    2086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:15:15.770219    2086 ssh_runner.go:195] Run: openssl version
	I0806 00:15:15.772090    2086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:15:15.775976    2086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:15:15.777413    2086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:15:15.777436    2086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:15:15.779381    2086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:15:15.783145    2086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1455.pem && ln -fs /usr/share/ca-certificates/1455.pem /etc/ssl/certs/1455.pem"
	I0806 00:15:15.787016    2086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1455.pem
	I0806 00:15:15.788452    2086 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:12 /usr/share/ca-certificates/1455.pem
	I0806 00:15:15.788465    2086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1455.pem
	I0806 00:15:15.790517    2086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1455.pem /etc/ssl/certs/51391683.0"
	I0806 00:15:15.794046    2086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14552.pem && ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem"
	I0806 00:15:15.797635    2086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I0806 00:15:15.799176    2086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:12 /usr/share/ca-certificates/14552.pem
	I0806 00:15:15.799192    2086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I0806 00:15:15.801454    2086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/3ec20f2e.0"
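	
	The symlink names in the three test -L commands above are not arbitrary: OpenSSL resolves CA certificates in /etc/ssl/certs by subject-hash filename, which is exactly what the preceding openssl x509 -hash -noout runs compute. For the minikube CA the correspondence can be checked directly (a sketch; b5213941 is the hash encoded in the b5213941.0 link above):
	
		$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		b5213941
	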
	I0806 00:15:15.804634    2086 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:15:15.806210    2086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 00:15:15.808213    2086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 00:15:15.810221    2086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 00:15:15.812191    2086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 00:15:15.814249    2086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 00:15:15.816365    2086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 00:15:15.818467    2086 kubeadm.go:392] StartCluster: {Name:functional-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:15:15.818534    2086 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:15:15.824197    2086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:15:15.827822    2086 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 00:15:15.827825    2086 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 00:15:15.827851    2086 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 00:15:15.831313    2086 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:15:15.831587    2086 kubeconfig.go:125] found "functional-804000" server: "https://192.168.105.4:8441"
	I0806 00:15:15.832217    2086 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 00:15:15.835994    2086 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
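The drift detection above keys off diff's exit status: "diff -u old new" exits 0 when the files match, 1 when they differ (as here, where ExtraOptions changed enable-admission-plugins), and greater than 1 on error. A sketch of the same gate:

	# Sketch: exit status 1 from diff -u means the rendered kubeadm config drifted.
	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	  echo "kubeadm config drift detected; reconfiguring cluster from the .new file"
	fi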
	I0806 00:15:15.835998    2086 kubeadm.go:1160] stopping kube-system containers ...
	I0806 00:15:15.836042    2086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:15:15.843254    2086 docker.go:483] Stopping containers: [f1071e7b81bd 40cd843ba2e8 908ddb372b03 24a4f125b117 92867b0c502e b78d2f010e24 71754b78bc8e acb9282de606 a1714c87ea19 0339ddfa1422 bed826661f4d 372310c1d238 e90ba0078c70 bcf73931f499 14a1e7625b48 26e446fb20f3 0d699638fb1c db7e3a8b629b b87291859b54 504614ed2825 a36978f6be9d a73876c427a2 e3f93089a407 24e32474410a 7ff21e1d6bcd 8b50c2b79049 0f35f4ab0225 4ec16ba4db8c 3e6df478d56d]
	I0806 00:15:15.843319    2086 ssh_runner.go:195] Run: docker stop f1071e7b81bd 40cd843ba2e8 908ddb372b03 24a4f125b117 92867b0c502e b78d2f010e24 71754b78bc8e acb9282de606 a1714c87ea19 0339ddfa1422 bed826661f4d 372310c1d238 e90ba0078c70 bcf73931f499 14a1e7625b48 26e446fb20f3 0d699638fb1c db7e3a8b629b b87291859b54 504614ed2825 a36978f6be9d a73876c427a2 e3f93089a407 24e32474410a 7ff21e1d6bcd 8b50c2b79049 0f35f4ab0225 4ec16ba4db8c 3e6df478d56d
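Before rewriting the static-pod manifests, minikube sweeps every kube-system container. Kubelet-managed Docker containers are named k8s_<container>_<pod>_<namespace>_..., so the name filter above is a regex match; a one-line sketch of the same sweep:

	# Sketch: stop all kube-system pod containers (xargs -r skips docker stop when the list is empty).
	docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop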
	I0806 00:15:15.858127    2086 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 00:15:15.953816    2086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:15:15.959341    2086 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug  6 07:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug  6 07:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug  6 07:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Aug  6 07:14 /etc/kubernetes/scheduler.conf
	
	I0806 00:15:15.959372    2086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0806 00:15:15.963934    2086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0806 00:15:15.968190    2086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0806 00:15:15.972147    2086 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:15:15.972174    2086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:15:15.975959    2086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0806 00:15:15.979690    2086 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:15:15.979707    2086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
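The grep/rm cycle above validates each component kubeconfig against the expected control-plane endpoint and deletes any file that no longer references it, so the kubeconfig phase below can regenerate it. A sketch of that loop, using the endpoint from the log lines above:

	# Sketch: drop component kubeconfigs that don't point at the expected endpoint.
	endpoint="https://control-plane.minikube.internal:8441"
	for conf in admin kubelet controller-manager scheduler; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/${conf}.conf"; then
	    sudo rm -f "/etc/kubernetes/${conf}.conf"
	  fi
	done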
	I0806 00:15:15.983515    2086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:15:15.987423    2086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:15:16.007265    2086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:15:16.598006    2086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:15:16.717943    2086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:15:16.743548    2086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
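Rather than a full "kubeadm init", the restart path replays individual init phases against the regenerated config: certs, kubeconfigs, kubelet bootstrap, control-plane manifests, then local etcd. A sketch of the same sequence:

	# Sketch: the five init phases run above, in order, against the same config file.
	cfg=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase $phase --config "$cfg"
	done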
	I0806 00:15:16.774727    2086 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:15:16.774792    2086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:15:17.276861    2086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:15:17.776832    2086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:15:17.782011    2086 api_server.go:72] duration metric: took 1.007291166s to wait for apiserver process to appear ...
	I0806 00:15:17.782018    2086 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:15:17.782037    2086 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0806 00:15:20.373639    2086 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 00:15:20.373648    2086 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 00:15:20.373654    2086 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0806 00:15:20.396835    2086 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 00:15:20.396843    2086 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 00:15:20.784076    2086 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0806 00:15:20.787038    2086 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 00:15:20.787048    2086 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 00:15:21.284062    2086 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0806 00:15:21.287562    2086 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 00:15:21.287569    2086 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
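A 500 with "[-]" entries means the apiserver is up but some poststarthooks haven't finished; here the RBAC and priority-class bootstraps are the stragglers. The wait loop simply re-polls until the endpoint flips to 200, roughly:

	# Sketch: poll /healthz until every poststarthook reports ok and the status is 200.
	until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.105.4:8441/healthz)" = "200" ]; do
	  sleep 0.5
	done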
	I0806 00:15:21.784044    2086 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0806 00:15:21.787140    2086 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0806 00:15:21.790925    2086 api_server.go:141] control plane version: v1.30.3
	I0806 00:15:21.790931    2086 api_server.go:131] duration metric: took 4.00893675s to wait for apiserver health ...
	I0806 00:15:21.790935    2086 cni.go:84] Creating CNI manager for ""
	I0806 00:15:21.790941    2086 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:15:21.794226    2086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 00:15:21.798085    2086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 00:15:21.802574    2086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
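The 496-byte conflist itself isn't shown in the log, only that it was written from memory to /etc/cni/net.d/1-k8s.conflist. For reference, a representative bridge CNI config of the kind this step writes looks like the following; the field values are illustrative, not copied from this run:

	# Hypothetical bridge conflist; the subnet mirrors the node's PodCIDR shown later (10.244.0.0/24).
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF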
	I0806 00:15:21.808978    2086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:15:21.813773    2086 system_pods.go:59] 7 kube-system pods found
	I0806 00:15:21.813785    2086 system_pods.go:61] "coredns-7db6d8ff4d-rg2bp" [c6434c47-9a46-4655-ade7-511035169466] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 00:15:21.813788    2086 system_pods.go:61] "etcd-functional-804000" [2a322998-c470-443f-a52f-ad82bdc8a3b6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 00:15:21.813795    2086 system_pods.go:61] "kube-apiserver-functional-804000" [0d36b88d-5550-4e7e-b7b1-ac4eb075a3b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 00:15:21.813798    2086 system_pods.go:61] "kube-controller-manager-functional-804000" [da638b83-cd1c-4efd-903d-fa3eb6500da1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 00:15:21.813919    2086 system_pods.go:61] "kube-proxy-pk2pn" [2a0ddeda-0ed0-4f9f-bf73-01148e40b06b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0806 00:15:21.813930    2086 system_pods.go:61] "kube-scheduler-functional-804000" [027d98f9-ea6b-40c7-ad28-b52f6a66098c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 00:15:21.813935    2086 system_pods.go:61] "storage-provisioner" [e4cc646b-5d5c-47c7-828b-ec853a13964b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0806 00:15:21.813937    2086 system_pods.go:74] duration metric: took 4.955916ms to wait for pod list to return data ...
	I0806 00:15:21.813945    2086 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:15:21.816458    2086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:15:21.816465    2086 node_conditions.go:123] node cpu capacity is 2
	I0806 00:15:21.816470    2086 node_conditions.go:105] duration metric: took 2.523166ms to run NodePressure ...
	I0806 00:15:21.816477    2086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:15:22.038412    2086 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 00:15:22.040670    2086 kubeadm.go:739] kubelet initialised
	I0806 00:15:22.040674    2086 kubeadm.go:740] duration metric: took 2.253167ms waiting for restarted kubelet to initialise ...
	I0806 00:15:22.040678    2086 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:15:22.043141    2086 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-rg2bp" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:24.048023    2086 pod_ready.go:102] pod "coredns-7db6d8ff4d-rg2bp" in "kube-system" namespace has status "Ready":"False"
	I0806 00:15:26.048126    2086 pod_ready.go:102] pod "coredns-7db6d8ff4d-rg2bp" in "kube-system" namespace has status "Ready":"False"
	I0806 00:15:26.548288    2086 pod_ready.go:92] pod "coredns-7db6d8ff4d-rg2bp" in "kube-system" namespace has status "Ready":"True"
	I0806 00:15:26.548295    2086 pod_ready.go:81] duration metric: took 4.505178042s for pod "coredns-7db6d8ff4d-rg2bp" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:26.548299    2086 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:28.553583    2086 pod_ready.go:102] pod "etcd-functional-804000" in "kube-system" namespace has status "Ready":"False"
	I0806 00:15:31.053000    2086 pod_ready.go:102] pod "etcd-functional-804000" in "kube-system" namespace has status "Ready":"False"
	I0806 00:15:33.552216    2086 pod_ready.go:102] pod "etcd-functional-804000" in "kube-system" namespace has status "Ready":"False"
	I0806 00:15:35.553436    2086 pod_ready.go:102] pod "etcd-functional-804000" in "kube-system" namespace has status "Ready":"False"
	I0806 00:15:37.552672    2086 pod_ready.go:92] pod "etcd-functional-804000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:15:37.552678    2086 pod_ready.go:81] duration metric: took 11.004447417s for pod "etcd-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:37.552682    2086 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:37.554731    2086 pod_ready.go:92] pod "kube-apiserver-functional-804000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:15:37.554734    2086 pod_ready.go:81] duration metric: took 2.049167ms for pod "kube-apiserver-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:37.554737    2086 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:37.556862    2086 pod_ready.go:92] pod "kube-controller-manager-functional-804000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:15:37.556865    2086 pod_ready.go:81] duration metric: took 2.126125ms for pod "kube-controller-manager-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:37.556868    2086 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pk2pn" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:37.558827    2086 pod_ready.go:92] pod "kube-proxy-pk2pn" in "kube-system" namespace has status "Ready":"True"
	I0806 00:15:37.558833    2086 pod_ready.go:81] duration metric: took 1.962541ms for pod "kube-proxy-pk2pn" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:37.558836    2086 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:37.560831    2086 pod_ready.go:92] pod "kube-scheduler-functional-804000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:15:37.560834    2086 pod_ready.go:81] duration metric: took 1.996541ms for pod "kube-scheduler-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:37.560837    2086 pod_ready.go:38] duration metric: took 15.520256708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
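The readiness waits above are driven by label selectors over kube-system. A hand-rolled equivalent with kubectl wait, as a sketch:

	# Sketch: the same Ready condition minikube polls for, expressed with kubectl wait.
	kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m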
	I0806 00:15:37.560844    2086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 00:15:37.564929    2086 ops.go:34] apiserver oom_adj: -16
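An oom_adj of -16 tells the kernel to strongly deprioritize the apiserver when picking an OOM-kill victim (the legacy oom_adj knob ranges from -17, never kill, to +15). Checked by hand:

	# Sketch: read the apiserver's legacy OOM adjustment directly from procfs.
	cat "/proc/$(pgrep kube-apiserver)/oom_adj"
	# -16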
	I0806 00:15:37.564932    2086 kubeadm.go:597] duration metric: took 21.737246625s to restartPrimaryControlPlane
	I0806 00:15:37.564936    2086 kubeadm.go:394] duration metric: took 21.746612208s to StartCluster
	I0806 00:15:37.564943    2086 settings.go:142] acquiring lock: {Name:mk345cecdfb5b849013811e238a7c51cfd047298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:15:37.565036    2086 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:15:37.565361    2086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/kubeconfig: {Name:mk054609795edfdc491af119142ed9d8e6063b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:15:37.565589    2086 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:15:37.565594    2086 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 00:15:37.565635    2086 addons.go:69] Setting default-storageclass=true in profile "functional-804000"
	I0806 00:15:37.565628    2086 addons.go:69] Setting storage-provisioner=true in profile "functional-804000"
	I0806 00:15:37.565657    2086 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-804000"
	I0806 00:15:37.565666    2086 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:15:37.565675    2086 addons.go:234] Setting addon storage-provisioner=true in "functional-804000"
	W0806 00:15:37.565678    2086 addons.go:243] addon storage-provisioner should already be in state true
	I0806 00:15:37.565686    2086 host.go:66] Checking if "functional-804000" exists ...
	I0806 00:15:37.566570    2086 addons.go:234] Setting addon default-storageclass=true in "functional-804000"
	W0806 00:15:37.566572    2086 addons.go:243] addon default-storageclass should already be in state true
	I0806 00:15:37.566577    2086 host.go:66] Checking if "functional-804000" exists ...
	I0806 00:15:37.569491    2086 out.go:177] * Verifying Kubernetes components...
	I0806 00:15:37.569806    2086 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 00:15:37.573899    2086 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 00:15:37.573906    2086 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/functional-804000/id_rsa Username:docker}
	I0806 00:15:37.577471    2086 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:15:37.581475    2086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:15:37.584493    2086 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:15:37.584496    2086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 00:15:37.584501    2086 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/functional-804000/id_rsa Username:docker}
	I0806 00:15:37.681706    2086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:15:37.688917    2086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 00:15:37.689554    2086 node_ready.go:35] waiting up to 6m0s for node "functional-804000" to be "Ready" ...
	I0806 00:15:37.716189    2086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:15:37.753995    2086 node_ready.go:49] node "functional-804000" has status "Ready":"True"
	I0806 00:15:37.754012    2086 node_ready.go:38] duration metric: took 64.440333ms for node "functional-804000" to be "Ready" ...
	I0806 00:15:37.754015    2086 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:15:37.955449    2086 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rg2bp" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:38.006975    2086 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0806 00:15:38.015169    2086 addons.go:510] duration metric: took 449.577875ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0806 00:15:38.353664    2086 pod_ready.go:92] pod "coredns-7db6d8ff4d-rg2bp" in "kube-system" namespace has status "Ready":"True"
	I0806 00:15:38.353669    2086 pod_ready.go:81] duration metric: took 398.2165ms for pod "coredns-7db6d8ff4d-rg2bp" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:38.353674    2086 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:38.753557    2086 pod_ready.go:92] pod "etcd-functional-804000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:15:38.753562    2086 pod_ready.go:81] duration metric: took 399.88825ms for pod "etcd-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:38.753565    2086 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:39.152668    2086 pod_ready.go:92] pod "kube-apiserver-functional-804000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:15:39.152674    2086 pod_ready.go:81] duration metric: took 399.108875ms for pod "kube-apiserver-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:39.152680    2086 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:39.553752    2086 pod_ready.go:92] pod "kube-controller-manager-functional-804000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:15:39.553759    2086 pod_ready.go:81] duration metric: took 401.079042ms for pod "kube-controller-manager-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:39.553764    2086 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pk2pn" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:39.953571    2086 pod_ready.go:92] pod "kube-proxy-pk2pn" in "kube-system" namespace has status "Ready":"True"
	I0806 00:15:39.953576    2086 pod_ready.go:81] duration metric: took 399.812541ms for pod "kube-proxy-pk2pn" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:39.953580    2086 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:40.353663    2086 pod_ready.go:92] pod "kube-scheduler-functional-804000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:15:40.353670    2086 pod_ready.go:81] duration metric: took 400.089458ms for pod "kube-scheduler-functional-804000" in "kube-system" namespace to be "Ready" ...
	I0806 00:15:40.353675    2086 pod_ready.go:38] duration metric: took 2.599672334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:15:40.353686    2086 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:15:40.353774    2086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:15:40.359763    2086 api_server.go:72] duration metric: took 2.794184417s to wait for apiserver process to appear ...
	I0806 00:15:40.359768    2086 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:15:40.359779    2086 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0806 00:15:40.362662    2086 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0806 00:15:40.363144    2086 api_server.go:141] control plane version: v1.30.3
	I0806 00:15:40.363147    2086 api_server.go:131] duration metric: took 3.377625ms to wait for apiserver health ...
	I0806 00:15:40.363150    2086 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:15:40.555576    2086 system_pods.go:59] 7 kube-system pods found
	I0806 00:15:40.555583    2086 system_pods.go:61] "coredns-7db6d8ff4d-rg2bp" [c6434c47-9a46-4655-ade7-511035169466] Running
	I0806 00:15:40.555585    2086 system_pods.go:61] "etcd-functional-804000" [2a322998-c470-443f-a52f-ad82bdc8a3b6] Running
	I0806 00:15:40.555587    2086 system_pods.go:61] "kube-apiserver-functional-804000" [0d36b88d-5550-4e7e-b7b1-ac4eb075a3b4] Running
	I0806 00:15:40.555589    2086 system_pods.go:61] "kube-controller-manager-functional-804000" [da638b83-cd1c-4efd-903d-fa3eb6500da1] Running
	I0806 00:15:40.555590    2086 system_pods.go:61] "kube-proxy-pk2pn" [2a0ddeda-0ed0-4f9f-bf73-01148e40b06b] Running
	I0806 00:15:40.555591    2086 system_pods.go:61] "kube-scheduler-functional-804000" [027d98f9-ea6b-40c7-ad28-b52f6a66098c] Running
	I0806 00:15:40.555592    2086 system_pods.go:61] "storage-provisioner" [e4cc646b-5d5c-47c7-828b-ec853a13964b] Running
	I0806 00:15:40.555594    2086 system_pods.go:74] duration metric: took 192.443583ms to wait for pod list to return data ...
	I0806 00:15:40.555597    2086 default_sa.go:34] waiting for default service account to be created ...
	I0806 00:15:40.753689    2086 default_sa.go:45] found service account: "default"
	I0806 00:15:40.753698    2086 default_sa.go:55] duration metric: took 198.098583ms for default service account to be created ...
	I0806 00:15:40.753701    2086 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 00:15:40.955342    2086 system_pods.go:86] 7 kube-system pods found
	I0806 00:15:40.955349    2086 system_pods.go:89] "coredns-7db6d8ff4d-rg2bp" [c6434c47-9a46-4655-ade7-511035169466] Running
	I0806 00:15:40.955351    2086 system_pods.go:89] "etcd-functional-804000" [2a322998-c470-443f-a52f-ad82bdc8a3b6] Running
	I0806 00:15:40.955353    2086 system_pods.go:89] "kube-apiserver-functional-804000" [0d36b88d-5550-4e7e-b7b1-ac4eb075a3b4] Running
	I0806 00:15:40.955355    2086 system_pods.go:89] "kube-controller-manager-functional-804000" [da638b83-cd1c-4efd-903d-fa3eb6500da1] Running
	I0806 00:15:40.955356    2086 system_pods.go:89] "kube-proxy-pk2pn" [2a0ddeda-0ed0-4f9f-bf73-01148e40b06b] Running
	I0806 00:15:40.955357    2086 system_pods.go:89] "kube-scheduler-functional-804000" [027d98f9-ea6b-40c7-ad28-b52f6a66098c] Running
	I0806 00:15:40.955358    2086 system_pods.go:89] "storage-provisioner" [e4cc646b-5d5c-47c7-828b-ec853a13964b] Running
	I0806 00:15:40.955361    2086 system_pods.go:126] duration metric: took 201.659084ms to wait for k8s-apps to be running ...
	I0806 00:15:40.955364    2086 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:15:40.955414    2086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
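"systemctl is-active --quiet" prints nothing and exits 0 only when the unit is in the active state, which makes it a convenient boolean probe for scripts:

	# Sketch: the same kubelet liveness check as a shell conditional.
	if sudo systemctl is-active --quiet kubelet; then
	  echo "kubelet service is running"
	fi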
	I0806 00:15:40.960951    2086 system_svc.go:56] duration metric: took 5.583708ms WaitForService to wait for kubelet
	I0806 00:15:40.960959    2086 kubeadm.go:582] duration metric: took 3.395383917s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:15:40.960968    2086 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:15:41.153606    2086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:15:41.153611    2086 node_conditions.go:123] node cpu capacity is 2
	I0806 00:15:41.153616    2086 node_conditions.go:105] duration metric: took 192.646625ms to run NodePressure ...
	I0806 00:15:41.153621    2086 start.go:241] waiting for startup goroutines ...
	I0806 00:15:41.153625    2086 start.go:246] waiting for cluster config update ...
	I0806 00:15:41.153629    2086 start.go:255] writing updated cluster config ...
	I0806 00:15:41.153943    2086 ssh_runner.go:195] Run: rm -f paused
	I0806 00:15:41.184491    2086 start.go:600] kubectl: 1.29.2, cluster: 1.30.3 (minor skew: 1)
	I0806 00:15:41.188691    2086 out.go:177] * Done! kubectl is now configured to use "functional-804000" cluster and "default" namespace by default
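The closing line reports a client/server minor skew of 1 (kubectl 1.29 against a 1.30.3 cluster), which is within kubectl's supported +/-1 minor-version skew, hence a note rather than a failure. A sketch of checking it by hand:

	# Sketch: compare client and server versions; a skew of one minor version is supported.
	kubectl version --output=json | grep gitVersion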
	
	
	==> Docker <==
	Aug 06 07:16:20 functional-804000 dockerd[6224]: time="2024-08-06T07:16:20.550717951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:16:20 functional-804000 dockerd[6224]: time="2024-08-06T07:16:20.550726328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:16:20 functional-804000 dockerd[6224]: time="2024-08-06T07:16:20.550856238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:16:20 functional-804000 dockerd[6224]: time="2024-08-06T07:16:20.892687922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:16:20 functional-804000 dockerd[6224]: time="2024-08-06T07:16:20.892719597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:16:20 functional-804000 dockerd[6224]: time="2024-08-06T07:16:20.892725224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:16:20 functional-804000 dockerd[6224]: time="2024-08-06T07:16:20.892915650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:16:20 functional-804000 dockerd[6218]: time="2024-08-06T07:16:20.912383149Z" level=info msg="ignoring event" container=03a95aa7e16833e609c3e88476538c3676b8d86ad06246c23fb0924ed1ef3c1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:16:20 functional-804000 dockerd[6224]: time="2024-08-06T07:16:20.912594664Z" level=info msg="shim disconnected" id=03a95aa7e16833e609c3e88476538c3676b8d86ad06246c23fb0924ed1ef3c1d namespace=moby
	Aug 06 07:16:20 functional-804000 dockerd[6224]: time="2024-08-06T07:16:20.912623254Z" level=warning msg="cleaning up after shim disconnected" id=03a95aa7e16833e609c3e88476538c3676b8d86ad06246c23fb0924ed1ef3c1d namespace=moby
	Aug 06 07:16:20 functional-804000 dockerd[6224]: time="2024-08-06T07:16:20.912627381Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 07:16:29 functional-804000 dockerd[6224]: time="2024-08-06T07:16:29.268550782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:16:29 functional-804000 dockerd[6224]: time="2024-08-06T07:16:29.268589084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:16:29 functional-804000 dockerd[6224]: time="2024-08-06T07:16:29.268776717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:16:29 functional-804000 dockerd[6224]: time="2024-08-06T07:16:29.268813352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:16:29 functional-804000 cri-dockerd[6479]: time="2024-08-06T07:16:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/432d3ea83773cb70746bf4f5357a061620668b11765330b8c22c3abdb699e098/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 06 07:16:30 functional-804000 cri-dockerd[6479]: time="2024-08-06T07:16:30Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 06 07:16:30 functional-804000 dockerd[6224]: time="2024-08-06T07:16:30.384053724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:16:30 functional-804000 dockerd[6224]: time="2024-08-06T07:16:30.384091276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:16:30 functional-804000 dockerd[6224]: time="2024-08-06T07:16:30.384404441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:16:30 functional-804000 dockerd[6224]: time="2024-08-06T07:16:30.384502092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:16:30 functional-804000 dockerd[6218]: time="2024-08-06T07:16:30.426541998Z" level=info msg="ignoring event" container=d27ab7ed49d54a6e8a82fedfc4ee79fa04cc3e555431e705675dd6cf92c3a66b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:16:30 functional-804000 dockerd[6224]: time="2024-08-06T07:16:30.426686078Z" level=info msg="shim disconnected" id=d27ab7ed49d54a6e8a82fedfc4ee79fa04cc3e555431e705675dd6cf92c3a66b namespace=moby
	Aug 06 07:16:30 functional-804000 dockerd[6224]: time="2024-08-06T07:16:30.426754221Z" level=warning msg="cleaning up after shim disconnected" id=d27ab7ed49d54a6e8a82fedfc4ee79fa04cc3e555431e705675dd6cf92c3a66b namespace=moby
	Aug 06 07:16:30 functional-804000 dockerd[6224]: time="2024-08-06T07:16:30.426759014Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d27ab7ed49d54       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   1 second ago         Exited              mount-munger              0                   432d3ea83773c       busybox-mount
	03a95aa7e1683       72565bf5bbedf                                                                                         11 seconds ago       Exited              echoserver-arm            2                   b6490bdfec728       hello-node-connect-6f49f58cd5-q6fxv
	ccec03f570d2c       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                         11 seconds ago       Running             myfrontend                0                   f1bff31ae8f1b       sp-pod
	7eada953e22c1       72565bf5bbedf                                                                                         25 seconds ago       Exited              echoserver-arm            2                   a2db9de4c192b       hello-node-65f5d5cc78-pdwsk
	f9d8e53a9beb4       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         35 seconds ago       Running             nginx                     0                   f141dcee5ce0f       nginx-svc
	731b731a7881c       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   66af76c7c6490       coredns-7db6d8ff4d-rg2bp
	c83948fedda67       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   2ceef8c5c9a4c       storage-provisioner
	b17981e5e2ec4       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   f827b06190250       kube-proxy-pk2pn
	539fa558137f4       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   5b19002575489       etcd-functional-804000
	2e13405d5fd45       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   bc5ec1c42b20a       kube-controller-manager-functional-804000
	9e20bc5802023       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   13dd59dde781b       kube-scheduler-functional-804000
	f18a3a3074476       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   7e5bd17824733       kube-apiserver-functional-804000
	f1071e7b81bd5       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   71754b78bc8e3       storage-provisioner
	40cd843ba2e82       2437cf7621777                                                                                         About a minute ago   Exited              coredns                   1                   92867b0c502eb       coredns-7db6d8ff4d-rg2bp
	24a4f125b1176       2351f570ed0ea                                                                                         About a minute ago   Exited              kube-proxy                1                   b78d2f010e240       kube-proxy-pk2pn
	acb9282de6066       014faa467e297                                                                                         About a minute ago   Exited              etcd                      1                   14a1e7625b481       etcd-functional-804000
	0339ddfa14224       8e97cdb19e7cc                                                                                         About a minute ago   Exited              kube-controller-manager   1                   e90ba0078c70e       kube-controller-manager-functional-804000
	bed826661f4d0       d48f992a22722                                                                                         About a minute ago   Exited              kube-scheduler            1                   372310c1d2381       kube-scheduler-functional-804000
	
	
	==> coredns [40cd843ba2e8] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59391 - 65447 "HINFO IN 7638894785896708754.8937028514557996437. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008868809s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [731b731a7881] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34579 - 43732 "HINFO IN 6290292380580534544.2269994636904450355. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.00896604s
	[INFO] 10.244.0.1:28927 - 40911 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000105197s
	[INFO] 10.244.0.1:40712 - 63968 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000096111s
	[INFO] 10.244.0.1:49950 - 33617 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000914343s
	[INFO] 10.244.0.1:5882 - 38944 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000071687s
	[INFO] 10.244.0.1:38898 - 7737 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000067478s
	[INFO] 10.244.0.1:10467 - 38836 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000241027s
	
	
	==> describe nodes <==
	Name:               functional-804000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-804000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=functional-804000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T00_13_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:13:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-804000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:16:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:16:21 +0000   Tue, 06 Aug 2024 07:13:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:16:21 +0000   Tue, 06 Aug 2024 07:13:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:16:21 +0000   Tue, 06 Aug 2024 07:13:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:16:21 +0000   Tue, 06 Aug 2024 07:13:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-804000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0fbfb02cfc64e0b9769a6cb2005d60a
	  System UUID:                d0fbfb02cfc64e0b9769a6cb2005d60a
	  Boot ID:                    dc8b5671-32f8-4958-880b-4960e9af4e71
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox-mount                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     hello-node-65f5d5cc78-pdwsk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     hello-node-connect-6f49f58cd5-q6fxv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-7db6d8ff4d-rg2bp                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m57s
	  kube-system                 etcd-functional-804000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m11s
	  kube-system                 kube-apiserver-functional-804000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-functional-804000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kube-proxy-pk2pn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-scheduler-functional-804000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m56s                  kube-proxy       
	  Normal  Starting                 70s                    kube-proxy       
	  Normal  Starting                 116s                   kube-proxy       
	  Normal  Starting                 3m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m15s (x8 over 3m15s)  kubelet          Node functional-804000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m15s (x8 over 3m15s)  kubelet          Node functional-804000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m15s (x7 over 3m15s)  kubelet          Node functional-804000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeAllocatableEnforced  3m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m11s (x2 over 3m11s)  kubelet          Node functional-804000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m11s (x2 over 3m11s)  kubelet          Node functional-804000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m11s (x2 over 3m11s)  kubelet          Node functional-804000 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m11s                  kubelet          Starting kubelet.
	  Normal  NodeReady                3m7s                   kubelet          Node functional-804000 status is now: NodeReady
	  Normal  RegisteredNode           2m58s                  node-controller  Node functional-804000 event: Registered Node functional-804000 in Controller
	  Normal  Starting                 2m                     kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    119s (x8 over 2m)      kubelet          Node functional-804000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x7 over 2m)      kubelet          Node functional-804000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  119s (x8 over 2m)      kubelet          Node functional-804000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           104s                   node-controller  Node functional-804000 event: Registered Node functional-804000 in Controller
	  Normal  Starting                 75s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  75s (x8 over 75s)      kubelet          Node functional-804000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s (x8 over 75s)      kubelet          Node functional-804000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s (x7 over 75s)      kubelet          Node functional-804000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           59s                    node-controller  Node functional-804000 event: Registered Node functional-804000 in Controller
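
The requests/limits summary above can be re-derived with standard kubectl; a minimal sketch, assuming the functional-804000 context from this run is still reachable:

	# Not part of the test run -- spot-check the same node summary by hand.
	kubectl --context functional-804000 describe node functional-804000 | grep -A 10 'Allocated resources:'
	# Allocatable memory as reported by the node (should match the 3904740Ki above).
	kubectl --context functional-804000 get node functional-804000 -o jsonpath='{.status.allocatable.memory}'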
	
	
	==> dmesg <==
	[  +0.817184] systemd-fstab-generator[4354]: Ignoring "noauto" option for root device
	[  +3.564866] kauditd_printk_skb: 215 callbacks suppressed
	[ +12.155806] kauditd_printk_skb: 15 callbacks suppressed
	[  +4.768625] systemd-fstab-generator[5327]: Ignoring "noauto" option for root device
	[Aug 6 07:15] systemd-fstab-generator[5751]: Ignoring "noauto" option for root device
	[  +0.053465] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.121668] systemd-fstab-generator[5784]: Ignoring "noauto" option for root device
	[  +0.104871] systemd-fstab-generator[5797]: Ignoring "noauto" option for root device
	[  +0.123483] systemd-fstab-generator[5811]: Ignoring "noauto" option for root device
	[  +5.098997] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.351800] systemd-fstab-generator[6432]: Ignoring "noauto" option for root device
	[  +0.097113] systemd-fstab-generator[6444]: Ignoring "noauto" option for root device
	[  +0.086387] systemd-fstab-generator[6456]: Ignoring "noauto" option for root device
	[  +0.101015] systemd-fstab-generator[6471]: Ignoring "noauto" option for root device
	[  +0.232075] systemd-fstab-generator[6640]: Ignoring "noauto" option for root device
	[  +1.044452] systemd-fstab-generator[6763]: Ignoring "noauto" option for root device
	[  +1.229443] kauditd_printk_skb: 189 callbacks suppressed
	[ +14.881730] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.838047] systemd-fstab-generator[7757]: Ignoring "noauto" option for root device
	[  +5.029775] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.800767] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.210137] kauditd_printk_skb: 16 callbacks suppressed
	[Aug 6 07:16] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.498341] kauditd_printk_skb: 38 callbacks suppressed
	[ +17.932804] kauditd_printk_skb: 21 callbacks suppressed
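
The same kernel ring buffer can be pulled directly from the guest; a hedged sketch using this job's minikube binary (any minikube build works):

	# Fetch the tail of the guest kernel log over SSH.
	out/minikube-darwin-arm64 -p functional-804000 ssh -- dmesg | tail -n 40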
	
	
	==> etcd [539fa558137f] <==
	{"level":"info","ts":"2024-08-06T07:15:18.105015Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-06T07:15:18.105034Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-06T07:15:18.105132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-08-06T07:15:18.10518Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-06T07:15:18.105241Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:15:18.105267Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:15:18.107204Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-06T07:15:18.107282Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-06T07:15:18.107312Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-06T07:15:18.108218Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T07:15:18.108268Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T07:15:19.901681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-06T07:15:19.902073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-06T07:15:19.902259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-06T07:15:19.902383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-06T07:15:19.902539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-06T07:15:19.902696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-06T07:15:19.902779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-06T07:15:19.908163Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-804000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T07:15:19.9083Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:15:19.908702Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:15:19.908743Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T07:15:19.908775Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:15:19.912143Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T07:15:19.912184Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> etcd [acb9282de606] <==
	{"level":"info","ts":"2024-08-06T07:14:32.679998Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:14:33.858672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-06T07:14:33.858843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-06T07:14:33.85891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-06T07:14:33.858945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-06T07:14:33.859003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-06T07:14:33.859272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-06T07:14:33.85945Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-06T07:14:33.864641Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-804000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T07:14:33.864654Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:14:33.865526Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:14:33.865578Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T07:14:33.864694Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:14:33.86996Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-06T07:14:33.870178Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T07:15:02.840236Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-06T07:15:02.840274Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-804000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-06T07:15:02.840315Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:15:02.840374Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:15:02.84908Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:15:02.849101Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-06T07:15:02.849121Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-06T07:15:02.853113Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-06T07:15:02.853179Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-06T07:15:02.853184Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-804000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 07:16:31 up 3 min,  0 users,  load average: 0.68, 0.38, 0.16
	Linux functional-804000 5.10.207 #1 SMP PREEMPT Mon Jul 29 12:07:32 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f18a3a307447] <==
	I0806 07:15:20.552881       1 autoregister_controller.go:141] Starting autoregister controller
	I0806 07:15:20.552905       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0806 07:15:20.552924       1 cache.go:39] Caches are synced for autoregister controller
	I0806 07:15:20.554562       1 shared_informer.go:320] Caches are synced for configmaps
	I0806 07:15:20.554592       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0806 07:15:20.554611       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0806 07:15:20.554942       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0806 07:15:20.555185       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0806 07:15:20.578229       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 07:15:20.578260       1 policy_source.go:224] refreshing policies
	I0806 07:15:20.578234       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0806 07:15:20.599464       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:15:21.455324       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 07:15:21.949181       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 07:15:21.952892       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 07:15:21.962842       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 07:15:21.970071       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 07:15:21.972015       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 07:15:32.882539       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0806 07:15:32.937466       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:15:42.751756       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.109.140"}
	I0806 07:15:48.508963       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0806 07:15:48.552316       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.69.180"}
	I0806 07:15:53.437314       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.39.127"}
	I0806 07:16:03.837884       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.188.173"}
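
The "allocated clusterIPs" lines record the Service IPs handed out during the test window; a minimal cross-check, assuming the same kubeconfig context:

	# Confirm the Services and cluster IPs logged above.
	kubectl --context functional-804000 get svc -n default -o wide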
	
	
	==> kube-controller-manager [0339ddfa1422] <==
	I0806 07:14:47.552563       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0806 07:14:47.552766       1 shared_informer.go:320] Caches are synced for GC
	I0806 07:14:47.576840       1 shared_informer.go:320] Caches are synced for deployment
	I0806 07:14:47.576867       1 shared_informer.go:320] Caches are synced for PV protection
	I0806 07:14:47.577175       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0806 07:14:47.577225       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0806 07:14:47.577402       1 shared_informer.go:320] Caches are synced for endpoint
	I0806 07:14:47.582097       1 shared_informer.go:320] Caches are synced for node
	I0806 07:14:47.582151       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0806 07:14:47.582195       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0806 07:14:47.582205       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0806 07:14:47.582231       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0806 07:14:47.583501       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0806 07:14:47.627855       1 shared_informer.go:320] Caches are synced for daemon sets
	I0806 07:14:47.665015       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 07:14:47.685685       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 07:14:47.778091       1 shared_informer.go:320] Caches are synced for attach detach
	I0806 07:14:47.827353       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0806 07:14:47.827361       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0806 07:14:47.827369       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0806 07:14:47.827466       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0806 07:14:47.827471       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0806 07:14:48.196839       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:14:48.227113       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:14:48.227126       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [2e13405d5fd4] <==
	I0806 07:15:32.953775       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0806 07:15:33.140005       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 07:15:33.150136       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0806 07:15:33.155642       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 07:15:33.567901       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:15:33.585231       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:15:33.585245       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0806 07:15:48.520988       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="10.656058ms"
	I0806 07:15:48.523421       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="2.409775ms"
	I0806 07:15:48.523508       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="22.381µs"
	I0806 07:15:48.523566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="21.381µs"
	I0806 07:15:48.528496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="18.964µs"
	I0806 07:15:54.187331       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="17.255µs"
	I0806 07:15:55.200932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="25.007µs"
	I0806 07:15:56.206199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="22.215µs"
	I0806 07:16:03.800154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="8.045498ms"
	I0806 07:16:03.804847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="4.667161ms"
	I0806 07:16:03.804993       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="40.512µs"
	I0806 07:16:03.809903       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="25.09µs"
	I0806 07:16:05.264552       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="26.966µs"
	I0806 07:16:06.270720       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="23.298µs"
	I0806 07:16:07.279167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="25.132µs"
	I0806 07:16:07.284306       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="42.596µs"
	I0806 07:16:21.367924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="24.048µs"
	I0806 07:16:21.867753       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="23.757µs"
	
	
	==> kube-proxy [24a4f125b117] <==
	I0806 07:14:35.569730       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:14:35.576935       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0806 07:14:35.606441       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:14:35.606469       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:14:35.606480       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:14:35.607195       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:14:35.607269       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:14:35.607274       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:14:35.608103       1 config.go:192] "Starting service config controller"
	I0806 07:14:35.608107       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:14:35.608117       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:14:35.608119       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:14:35.608235       1 config.go:319] "Starting node config controller"
	I0806 07:14:35.608238       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:14:35.709755       1 shared_informer.go:320] Caches are synced for node config
	I0806 07:14:35.709755       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:14:35.709817       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [b17981e5e2ec] <==
	I0806 07:15:21.382846       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:15:21.388310       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0806 07:15:21.400205       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:15:21.400234       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:15:21.400244       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:15:21.401642       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:15:21.401724       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:15:21.401733       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:15:21.402101       1 config.go:192] "Starting service config controller"
	I0806 07:15:21.402111       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:15:21.402121       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:15:21.402143       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:15:21.402343       1 config.go:319] "Starting node config controller"
	I0806 07:15:21.402351       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:15:21.502543       1 shared_informer.go:320] Caches are synced for node config
	I0806 07:15:21.502610       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:15:21.502660       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9e20bc580202] <==
	I0806 07:15:18.007381       1 serving.go:380] Generated self-signed cert in-memory
	W0806 07:15:20.480051       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 07:15:20.480100       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:15:20.480111       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 07:15:20.480118       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 07:15:20.505794       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0806 07:15:20.505892       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:15:20.506627       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 07:15:20.506640       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 07:15:20.506737       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 07:15:20.506778       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 07:15:20.607048       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [bed826661f4d] <==
	E0806 07:14:34.443534       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0806 07:14:34.443572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 07:14:34.443581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0806 07:14:34.443604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0806 07:14:34.443612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0806 07:14:34.443647       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 07:14:34.443654       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0806 07:14:34.443674       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 07:14:34.443679       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0806 07:14:34.443702       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:14:34.443709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:14:34.443740       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:14:34.443749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:14:34.443771       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:14:34.443776       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:14:34.443818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0806 07:14:34.443827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0806 07:14:34.443868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0806 07:14:34.443892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0806 07:14:34.443908       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0806 07:14:34.443915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0806 07:14:34.443929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:14:34.443937       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0806 07:14:34.536934       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0806 07:15:02.838870       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 06 07:16:18 functional-804000 kubelet[6770]: I0806 07:16:18.547225    6770 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/a361b88a-9b92-49d9-8b12-68450e7788a9-pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935\") pod \"a361b88a-9b92-49d9-8b12-68450e7788a9\" (UID: \"a361b88a-9b92-49d9-8b12-68450e7788a9\") "
	Aug 06 07:16:18 functional-804000 kubelet[6770]: I0806 07:16:18.547259    6770 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a361b88a-9b92-49d9-8b12-68450e7788a9-pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935" (OuterVolumeSpecName: "mypd") pod "a361b88a-9b92-49d9-8b12-68450e7788a9" (UID: "a361b88a-9b92-49d9-8b12-68450e7788a9"). InnerVolumeSpecName "pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 06 07:16:18 functional-804000 kubelet[6770]: I0806 07:16:18.550006    6770 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a361b88a-9b92-49d9-8b12-68450e7788a9-kube-api-access-pnskz" (OuterVolumeSpecName: "kube-api-access-pnskz") pod "a361b88a-9b92-49d9-8b12-68450e7788a9" (UID: "a361b88a-9b92-49d9-8b12-68450e7788a9"). InnerVolumeSpecName "kube-api-access-pnskz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 06 07:16:18 functional-804000 kubelet[6770]: I0806 07:16:18.648134    6770 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pnskz\" (UniqueName: \"kubernetes.io/projected/a361b88a-9b92-49d9-8b12-68450e7788a9-kube-api-access-pnskz\") on node \"functional-804000\" DevicePath \"\""
	Aug 06 07:16:18 functional-804000 kubelet[6770]: I0806 07:16:18.648149    6770 reconciler_common.go:289] "Volume detached for volume \"pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935\" (UniqueName: \"kubernetes.io/host-path/a361b88a-9b92-49d9-8b12-68450e7788a9-pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935\") on node \"functional-804000\" DevicePath \"\""
	Aug 06 07:16:19 functional-804000 kubelet[6770]: I0806 07:16:19.344298    6770 scope.go:117] "RemoveContainer" containerID="c7fdfe90caf58024dd1bd79a9ab87d565fd727085b1e44466f2e78cf0c7f4799"
	Aug 06 07:16:19 functional-804000 kubelet[6770]: I0806 07:16:19.353391    6770 scope.go:117] "RemoveContainer" containerID="c7fdfe90caf58024dd1bd79a9ab87d565fd727085b1e44466f2e78cf0c7f4799"
	Aug 06 07:16:19 functional-804000 kubelet[6770]: E0806 07:16:19.353940    6770 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c7fdfe90caf58024dd1bd79a9ab87d565fd727085b1e44466f2e78cf0c7f4799" containerID="c7fdfe90caf58024dd1bd79a9ab87d565fd727085b1e44466f2e78cf0c7f4799"
	Aug 06 07:16:19 functional-804000 kubelet[6770]: I0806 07:16:19.353959    6770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c7fdfe90caf58024dd1bd79a9ab87d565fd727085b1e44466f2e78cf0c7f4799"} err="failed to get container status \"c7fdfe90caf58024dd1bd79a9ab87d565fd727085b1e44466f2e78cf0c7f4799\": rpc error: code = Unknown desc = Error response from daemon: No such container: c7fdfe90caf58024dd1bd79a9ab87d565fd727085b1e44466f2e78cf0c7f4799"
	Aug 06 07:16:19 functional-804000 kubelet[6770]: I0806 07:16:19.421997    6770 topology_manager.go:215] "Topology Admit Handler" podUID="de769b8e-374b-43c6-b6d8-b64ab9ff1848" podNamespace="default" podName="sp-pod"
	Aug 06 07:16:19 functional-804000 kubelet[6770]: E0806 07:16:19.422041    6770 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a361b88a-9b92-49d9-8b12-68450e7788a9" containerName="myfrontend"
	Aug 06 07:16:19 functional-804000 kubelet[6770]: I0806 07:16:19.422068    6770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a361b88a-9b92-49d9-8b12-68450e7788a9" containerName="myfrontend"
	Aug 06 07:16:19 functional-804000 kubelet[6770]: I0806 07:16:19.553548    6770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvmgv\" (UniqueName: \"kubernetes.io/projected/de769b8e-374b-43c6-b6d8-b64ab9ff1848-kube-api-access-jvmgv\") pod \"sp-pod\" (UID: \"de769b8e-374b-43c6-b6d8-b64ab9ff1848\") " pod="default/sp-pod"
	Aug 06 07:16:19 functional-804000 kubelet[6770]: I0806 07:16:19.553569    6770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935\" (UniqueName: \"kubernetes.io/host-path/de769b8e-374b-43c6-b6d8-b64ab9ff1848-pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935\") pod \"sp-pod\" (UID: \"de769b8e-374b-43c6-b6d8-b64ab9ff1848\") " pod="default/sp-pod"
	Aug 06 07:16:20 functional-804000 kubelet[6770]: I0806 07:16:20.864170    6770 scope.go:117] "RemoveContainer" containerID="3bbffc8976b0cbb78ee335165bb429faf43b4637b60822edfd766dc9783eed33"
	Aug 06 07:16:20 functional-804000 kubelet[6770]: I0806 07:16:20.868983    6770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a361b88a-9b92-49d9-8b12-68450e7788a9" path="/var/lib/kubelet/pods/a361b88a-9b92-49d9-8b12-68450e7788a9/volumes"
	Aug 06 07:16:21 functional-804000 kubelet[6770]: I0806 07:16:21.359859    6770 scope.go:117] "RemoveContainer" containerID="3bbffc8976b0cbb78ee335165bb429faf43b4637b60822edfd766dc9783eed33"
	Aug 06 07:16:21 functional-804000 kubelet[6770]: I0806 07:16:21.360142    6770 scope.go:117] "RemoveContainer" containerID="03a95aa7e16833e609c3e88476538c3676b8d86ad06246c23fb0924ed1ef3c1d"
	Aug 06 07:16:21 functional-804000 kubelet[6770]: E0806 07:16:21.360234    6770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-q6fxv_default(645fb0eb-f0e9-4b2d-89fe-49e5c01d0c81)\"" pod="default/hello-node-connect-6f49f58cd5-q6fxv" podUID="645fb0eb-f0e9-4b2d-89fe-49e5c01d0c81"
	Aug 06 07:16:21 functional-804000 kubelet[6770]: I0806 07:16:21.863508    6770 scope.go:117] "RemoveContainer" containerID="7eada953e22c1a443e8b1968804c143eaea76535807ff2942f658fc38c698937"
	Aug 06 07:16:21 functional-804000 kubelet[6770]: E0806 07:16:21.863597    6770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-pdwsk_default(114b0f17-fdda-44e5-a6c6-12295b2583e8)\"" pod="default/hello-node-65f5d5cc78-pdwsk" podUID="114b0f17-fdda-44e5-a6c6-12295b2583e8"
	Aug 06 07:16:21 functional-804000 kubelet[6770]: I0806 07:16:21.867447    6770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.145839803 podStartE2EDuration="2.867436947s" podCreationTimestamp="2024-08-06 07:16:19 +0000 UTC" firstStartedPulling="2024-08-06 07:16:19.804517435 +0000 UTC m=+62.989961421" lastFinishedPulling="2024-08-06 07:16:20.526114578 +0000 UTC m=+63.711558565" observedRunningTime="2024-08-06 07:16:21.371350803 +0000 UTC m=+64.556794790" watchObservedRunningTime="2024-08-06 07:16:21.867436947 +0000 UTC m=+65.052880934"
	Aug 06 07:16:28 functional-804000 kubelet[6770]: I0806 07:16:28.936143    6770 topology_manager.go:215] "Topology Admit Handler" podUID="b12365c6-b36a-4209-866c-ce843ababa8a" podNamespace="default" podName="busybox-mount"
	Aug 06 07:16:29 functional-804000 kubelet[6770]: I0806 07:16:29.007990    6770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b12365c6-b36a-4209-866c-ce843ababa8a-test-volume\") pod \"busybox-mount\" (UID: \"b12365c6-b36a-4209-866c-ce843ababa8a\") " pod="default/busybox-mount"
	Aug 06 07:16:29 functional-804000 kubelet[6770]: I0806 07:16:29.008012    6770 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bmqt\" (UniqueName: \"kubernetes.io/projected/b12365c6-b36a-4209-866c-ce843ababa8a-kube-api-access-9bmqt\") pod \"busybox-mount\" (UID: \"b12365c6-b36a-4209-866c-ce843ababa8a\") " pod="default/busybox-mount"
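
The CrashLoopBackOff entries above are the most likely proximate failure for this test: both echoserver-arm containers (hello-node and hello-node-connect) are in a 20s back-off. A hedged follow-up sketch, using the pod name taken from the kubelet log:

	# Previous container output from the failing pod.
	kubectl --context functional-804000 logs pod/hello-node-connect-6f49f58cd5-q6fxv --previous
	# Restart counts across the default namespace.
	kubectl --context functional-804000 get pods -n default -o wide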
	
	
	==> storage-provisioner [c83948fedda6] <==
	I0806 07:15:21.351693       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 07:15:21.356800       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 07:15:21.356821       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 07:15:38.744388       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 07:15:38.744623       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-804000_eb172b58-3c64-4963-baa7-efbf323379b9!
	I0806 07:15:38.745023       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"76d4a196-7fb1-41d6-bf51-472813c3b5bb", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-804000_eb172b58-3c64-4963-baa7-efbf323379b9 became leader
	I0806 07:15:38.845627       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-804000_eb172b58-3c64-4963-baa7-efbf323379b9!
	I0806 07:16:06.106443       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0806 07:16:06.106593       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    a2a4ed75-e0f2-4e81-9179-41eb65fc1826 389 0 2024-08-06 07:13:35 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-06 07:13:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  eff4dc0f-efea-4ba7-a666-7bdd42d47935 803 0 2024-08-06 07:16:06 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-06 07:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-06 07:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0806 07:16:06.107081       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"eff4dc0f-efea-4ba7-a666-7bdd42d47935", APIVersion:"v1", ResourceVersion:"803", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0806 07:16:06.107970       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935" provisioned
	I0806 07:16:06.108369       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0806 07:16:06.108410       1 volume_store.go:212] Trying to save persistentvolume "pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935"
	I0806 07:16:06.113923       1 volume_store.go:219] persistentvolume "pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935" saved
	I0806 07:16:06.114070       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"eff4dc0f-efea-4ba7-a666-7bdd42d47935", APIVersion:"v1", ResourceVersion:"803", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935
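
The provisioner log records PVC default/myclaim (500Mi) bound to hostPath volume pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935 under /tmp/hostpath-provisioner. A minimal verification sketch, assuming the cluster from this run is still up:

	# PVC phase and requested size.
	kubectl --context functional-804000 get pvc myclaim -n default
	# Backing hostPath of the provisioned PV (should print the /tmp/hostpath-provisioner path).
	kubectl --context functional-804000 get pv pvc-eff4dc0f-efea-4ba7-a666-7bdd42d47935 -o jsonpath='{.spec.hostPath.path}'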
	
	
	==> storage-provisioner [f1071e7b81bd] <==
	I0806 07:14:49.006437       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 07:14:49.010425       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 07:14:49.010446       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-804000 -n functional-804000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-804000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-804000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-804000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-804000/192.168.105.4
	Start Time:       Tue, 06 Aug 2024 00:16:28 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://d27ab7ed49d54a6e8a82fedfc4ee79fa04cc3e555431e705675dd6cf92c3a66b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 06 Aug 2024 00:16:30 -0700
	      Finished:     Tue, 06 Aug 2024 00:16:30 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9bmqt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-9bmqt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  4s    default-scheduler  Successfully assigned default/busybox-mount to functional-804000
	  Normal  Pulling    3s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.02s (1.02s including waiting). Image size: 3547125 bytes.
	  Normal  Created    2s    kubelet            Created container mount-munger
	  Normal  Started    2s    kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (28.50s)
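
The post-mortem above can be replayed by hand with the same helper commands (binary path as used by this job; adjust for a local checkout):

	# Same checks the helpers ran, in order.
	out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-804000 -n functional-804000
	kubectl --context functional-804000 get po -A --field-selector=status.phase!=Running
	kubectl --context functional-804000 describe pod busybox-mount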

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (312.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 node stop m02 -v=7 --alsologtostderr
E0806 00:21:08.916080    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-597000 node stop m02 -v=7 --alsologtostderr: (12.18897s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr
E0806 00:21:29.398177    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:22:10.359952    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:23:32.281235    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:23:35.458102    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr: exit status 7 (3m45.043209541s)

-- stdout --
	ha-597000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-597000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-597000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-597000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0806 00:21:20.218720    2790 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:21:20.218884    2790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:21:20.218887    2790 out.go:304] Setting ErrFile to fd 2...
	I0806 00:21:20.218890    2790 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:21:20.219048    2790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:21:20.219158    2790 out.go:298] Setting JSON to false
	I0806 00:21:20.219169    2790 mustload.go:65] Loading cluster: ha-597000
	I0806 00:21:20.219261    2790 notify.go:220] Checking for updates...
	I0806 00:21:20.219411    2790 config.go:182] Loaded profile config "ha-597000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:21:20.219419    2790 status.go:255] checking status of ha-597000 ...
	I0806 00:21:20.220136    2790 status.go:330] ha-597000 host status = "Running" (err=<nil>)
	I0806 00:21:20.220145    2790 host.go:66] Checking if "ha-597000" exists ...
	I0806 00:21:20.220246    2790 host.go:66] Checking if "ha-597000" exists ...
	I0806 00:21:20.220355    2790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:21:20.220364    2790 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000/id_rsa Username:docker}
	W0806 00:22:35.222116    2790 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0806 00:22:35.222221    2790 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0806 00:22:35.222232    2790 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0806 00:22:35.222237    2790 status.go:257] ha-597000 status: &{Name:ha-597000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0806 00:22:35.222251    2790 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0806 00:22:35.222255    2790 status.go:255] checking status of ha-597000-m02 ...
	I0806 00:22:35.222512    2790 status.go:330] ha-597000-m02 host status = "Stopped" (err=<nil>)
	I0806 00:22:35.222518    2790 status.go:343] host is not running, skipping remaining checks
	I0806 00:22:35.222520    2790 status.go:257] ha-597000-m02 status: &{Name:ha-597000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:22:35.222524    2790 status.go:255] checking status of ha-597000-m03 ...
	I0806 00:22:35.223176    2790 status.go:330] ha-597000-m03 host status = "Running" (err=<nil>)
	I0806 00:22:35.223184    2790 host.go:66] Checking if "ha-597000-m03" exists ...
	I0806 00:22:35.223300    2790 host.go:66] Checking if "ha-597000-m03" exists ...
	I0806 00:22:35.223433    2790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:22:35.223440    2790 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m03/id_rsa Username:docker}
	W0806 00:23:50.224532    2790 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0806 00:23:50.224588    2790 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0806 00:23:50.224597    2790 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0806 00:23:50.224601    2790 status.go:257] ha-597000-m03 status: &{Name:ha-597000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0806 00:23:50.224612    2790 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0806 00:23:50.224616    2790 status.go:255] checking status of ha-597000-m04 ...
	I0806 00:23:50.225450    2790 status.go:330] ha-597000-m04 host status = "Running" (err=<nil>)
	I0806 00:23:50.225459    2790 host.go:66] Checking if "ha-597000-m04" exists ...
	I0806 00:23:50.225569    2790 host.go:66] Checking if "ha-597000-m04" exists ...
	I0806 00:23:50.225697    2790 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:23:50.225703    2790 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m04/id_rsa Username:docker}
	W0806 00:25:05.227094    2790 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0806 00:25:05.227134    2790 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0806 00:25:05.227141    2790 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0806 00:25:05.227145    2790 status.go:257] ha-597000-m04 status: &{Name:ha-597000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0806 00:25:05.227153    2790 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
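Note: the probe that times out above is minikube running df over SSH on each node; the awk filter 'NR==2{print $5}' keeps only the Use% column of df's second output line. A minimal sketch of the same check by hand, assuming the guest is reachable:

    # Same per-node storage check that "minikube status" performs.
    out/minikube-darwin-arm64 -p ha-597000 ssh -n m03 -- df -h /var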
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr": ha-597000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-597000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-597000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-597000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr": ha-597000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-597000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-597000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-597000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr": ha-597000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-597000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-597000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-597000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000
E0806 00:25:48.419747    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:26:16.121661    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000: exit status 3 (1m15.034933709s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0806 00:26:20.260732    2872 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0806 00:26:20.260749    2872 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-597000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (312.27s)
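Note: each unreachable node above blocks for roughly 75s in the SSH dial before failing. A minimal sketch for probing the guest's SSH port with a short timeout instead, assuming macOS/BSD nc, whose -G flag sets the TCP connect timeout in seconds:

    # Fail fast rather than waiting out the 75s dial timeout.
    nc -z -G 5 192.168.105.5 22 && echo reachable || echo unreachable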

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0806 00:28:35.454818    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.071299292s)
ha_test.go:413: expected profile "ha-597000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-597000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-597000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-597000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docke
r\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
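Note: the assertion above embeds the entire profile config JSON. A minimal sketch for extracting just the per-profile status it checks, assuming jq is available on the runner:

    # Prints lines like "ha-597000: Stopped" from the profile JSON.
    out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | "\(.Name): \(.Status)"'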
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000
E0806 00:29:58.521730    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000: exit status 3 (1m15.035502166s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0806 00:30:05.366238    2952 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0806 00:30:05.366245    2952 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-597000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.11s)

TestMultiControlPlane/serial/RestartSecondaryNode (305.23s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-597000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.101737333s)

-- stdout --
	* Starting "ha-597000-m02" control-plane node in "ha-597000" cluster
	* Restarting existing qemu2 VM for "ha-597000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-597000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 00:30:05.398083    3262 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:30:05.398370    3262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:30:05.398376    3262 out.go:304] Setting ErrFile to fd 2...
	I0806 00:30:05.398379    3262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:30:05.398499    3262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:30:05.398732    3262 mustload.go:65] Loading cluster: ha-597000
	I0806 00:30:05.398979    3262 config.go:182] Loaded profile config "ha-597000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0806 00:30:05.399217    3262 host.go:58] "ha-597000-m02" host status: Stopped
	I0806 00:30:05.403799    3262 out.go:177] * Starting "ha-597000-m02" control-plane node in "ha-597000" cluster
	I0806 00:30:05.408737    3262 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:30:05.408748    3262 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 00:30:05.408753    3262 cache.go:56] Caching tarball of preloaded images
	I0806 00:30:05.408815    3262 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:30:05.408820    3262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:30:05.408874    3262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/ha-597000/config.json ...
	I0806 00:30:05.409519    3262 start.go:360] acquireMachinesLock for ha-597000-m02: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:30:05.409558    3262 start.go:364] duration metric: took 26.041µs to acquireMachinesLock for "ha-597000-m02"
	I0806 00:30:05.409568    3262 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:30:05.409575    3262 fix.go:54] fixHost starting: m02
	I0806 00:30:05.409665    3262 fix.go:112] recreateIfNeeded on ha-597000-m02: state=Stopped err=<nil>
	W0806 00:30:05.409670    3262 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:30:05.412666    3262 out.go:177] * Restarting existing qemu2 VM for "ha-597000-m02" ...
	I0806 00:30:05.416738    3262 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:30:05.416778    3262 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:61:3f:0a:e1:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/disk.qcow2
	I0806 00:30:05.418912    3262 main.go:141] libmachine: STDOUT: 
	I0806 00:30:05.418931    3262 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:30:05.418960    3262 fix.go:56] duration metric: took 9.385791ms for fixHost
	I0806 00:30:05.418964    3262 start.go:83] releasing machines lock for "ha-597000-m02", held for 9.402541ms
	W0806 00:30:05.418969    3262 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:30:05.419003    3262 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:30:05.419007    3262 start.go:729] Will try again in 5 seconds ...
	I0806 00:30:10.421222    3262 start.go:360] acquireMachinesLock for ha-597000-m02: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:30:10.421796    3262 start.go:364] duration metric: took 420.875µs to acquireMachinesLock for "ha-597000-m02"
	I0806 00:30:10.421944    3262 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:30:10.421967    3262 fix.go:54] fixHost starting: m02
	I0806 00:30:10.422706    3262 fix.go:112] recreateIfNeeded on ha-597000-m02: state=Stopped err=<nil>
	W0806 00:30:10.422733    3262 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:30:10.427643    3262 out.go:177] * Restarting existing qemu2 VM for "ha-597000-m02" ...
	I0806 00:30:10.432628    3262 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:30:10.432874    3262 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:61:3f:0a:e1:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/disk.qcow2
	I0806 00:30:10.442067    3262 main.go:141] libmachine: STDOUT: 
	I0806 00:30:10.442147    3262 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:30:10.442225    3262 fix.go:56] duration metric: took 20.262542ms for fixHost
	I0806 00:30:10.442246    3262 start.go:83] releasing machines lock for "ha-597000-m02", held for 20.421208ms
	W0806 00:30:10.442459    3262 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-597000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-597000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:30:10.446630    3262 out.go:177] 
	W0806 00:30:10.450705    3262 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:30:10.450726    3262 out.go:239] * 
	* 
	W0806 00:30:10.457567    3262 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:30:10.461661    3262 out.go:177] 

** /stderr **
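Note: 'Failed to connect to "/var/run/socket_vmnet": Connection refused' means qemu's socket_vmnet_client found no daemon listening on that socket. A minimal pre-retry check, assuming socket_vmnet was installed via Homebrew as in the minikube qemu2 driver docs:

    # The socket should exist and the daemon should be running before VMs can start.
    ls -l /var/run/socket_vmnet
    sudo brew services restart socket_vmnet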
ha_test.go:422: I0806 00:30:05.398083    3262 out.go:291] Setting OutFile to fd 1 ...
I0806 00:30:05.398370    3262 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:30:05.398376    3262 out.go:304] Setting ErrFile to fd 2...
I0806 00:30:05.398379    3262 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:30:05.398499    3262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
I0806 00:30:05.398732    3262 mustload.go:65] Loading cluster: ha-597000
I0806 00:30:05.398979    3262 config.go:182] Loaded profile config "ha-597000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0806 00:30:05.399217    3262 host.go:58] "ha-597000-m02" host status: Stopped
I0806 00:30:05.403799    3262 out.go:177] * Starting "ha-597000-m02" control-plane node in "ha-597000" cluster
I0806 00:30:05.408737    3262 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0806 00:30:05.408748    3262 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0806 00:30:05.408753    3262 cache.go:56] Caching tarball of preloaded images
I0806 00:30:05.408815    3262 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0806 00:30:05.408820    3262 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0806 00:30:05.408874    3262 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/ha-597000/config.json ...
I0806 00:30:05.409519    3262 start.go:360] acquireMachinesLock for ha-597000-m02: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0806 00:30:05.409558    3262 start.go:364] duration metric: took 26.041µs to acquireMachinesLock for "ha-597000-m02"
I0806 00:30:05.409568    3262 start.go:96] Skipping create...Using existing machine configuration
I0806 00:30:05.409575    3262 fix.go:54] fixHost starting: m02
I0806 00:30:05.409665    3262 fix.go:112] recreateIfNeeded on ha-597000-m02: state=Stopped err=<nil>
W0806 00:30:05.409670    3262 fix.go:138] unexpected machine state, will restart: <nil>
I0806 00:30:05.412666    3262 out.go:177] * Restarting existing qemu2 VM for "ha-597000-m02" ...
I0806 00:30:05.416738    3262 qemu.go:418] Using hvf for hardware acceleration
I0806 00:30:05.416778    3262 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:61:3f:0a:e1:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/disk.qcow2
I0806 00:30:05.418912    3262 main.go:141] libmachine: STDOUT: 
I0806 00:30:05.418931    3262 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0806 00:30:05.418960    3262 fix.go:56] duration metric: took 9.385791ms for fixHost
I0806 00:30:05.418964    3262 start.go:83] releasing machines lock for "ha-597000-m02", held for 9.402541ms
W0806 00:30:05.418969    3262 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0806 00:30:05.419003    3262 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0806 00:30:05.419007    3262 start.go:729] Will try again in 5 seconds ...
I0806 00:30:10.421222    3262 start.go:360] acquireMachinesLock for ha-597000-m02: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0806 00:30:10.421796    3262 start.go:364] duration metric: took 420.875µs to acquireMachinesLock for "ha-597000-m02"
I0806 00:30:10.421944    3262 start.go:96] Skipping create...Using existing machine configuration
I0806 00:30:10.421967    3262 fix.go:54] fixHost starting: m02
I0806 00:30:10.422706    3262 fix.go:112] recreateIfNeeded on ha-597000-m02: state=Stopped err=<nil>
W0806 00:30:10.422733    3262 fix.go:138] unexpected machine state, will restart: <nil>
I0806 00:30:10.427643    3262 out.go:177] * Restarting existing qemu2 VM for "ha-597000-m02" ...
I0806 00:30:10.432628    3262 qemu.go:418] Using hvf for hardware acceleration
I0806 00:30:10.432874    3262 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:61:3f:0a:e1:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/disk.qcow2
I0806 00:30:10.442067    3262 main.go:141] libmachine: STDOUT: 
I0806 00:30:10.442147    3262 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0806 00:30:10.442225    3262 fix.go:56] duration metric: took 20.262542ms for fixHost
I0806 00:30:10.442246    3262 start.go:83] releasing machines lock for "ha-597000-m02", held for 20.421208ms
W0806 00:30:10.442459    3262 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-597000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-597000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0806 00:30:10.446630    3262 out.go:177] 
W0806 00:30:10.450705    3262 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0806 00:30:10.450726    3262 out.go:239] * 
* 
W0806 00:30:10.457567    3262 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0806 00:30:10.461661    3262 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-597000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr
E0806 00:30:48.414911    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:33:35.441008    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr: exit status 7 (3m45.078045375s)

-- stdout --
	ha-597000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-597000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-597000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-597000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0806 00:30:10.529504    3267 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:30:10.529682    3267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:30:10.529686    3267 out.go:304] Setting ErrFile to fd 2...
	I0806 00:30:10.529689    3267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:30:10.529836    3267 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:30:10.529994    3267 out.go:298] Setting JSON to false
	I0806 00:30:10.530006    3267 mustload.go:65] Loading cluster: ha-597000
	I0806 00:30:10.530037    3267 notify.go:220] Checking for updates...
	I0806 00:30:10.530301    3267 config.go:182] Loaded profile config "ha-597000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:30:10.530311    3267 status.go:255] checking status of ha-597000 ...
	I0806 00:30:10.531137    3267 status.go:330] ha-597000 host status = "Running" (err=<nil>)
	I0806 00:30:10.531148    3267 host.go:66] Checking if "ha-597000" exists ...
	I0806 00:30:10.531273    3267 host.go:66] Checking if "ha-597000" exists ...
	I0806 00:30:10.531420    3267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:30:10.531430    3267 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000/id_rsa Username:docker}
	W0806 00:31:25.524943    3267 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0806 00:31:25.525264    3267 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0806 00:31:25.525314    3267 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0806 00:31:25.525331    3267 status.go:257] ha-597000 status: &{Name:ha-597000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0806 00:31:25.525376    3267 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0806 00:31:25.525396    3267 status.go:255] checking status of ha-597000-m02 ...
	I0806 00:31:25.526361    3267 status.go:330] ha-597000-m02 host status = "Stopped" (err=<nil>)
	I0806 00:31:25.526383    3267 status.go:343] host is not running, skipping remaining checks
	I0806 00:31:25.526395    3267 status.go:257] ha-597000-m02 status: &{Name:ha-597000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:31:25.526418    3267 status.go:255] checking status of ha-597000-m03 ...
	I0806 00:31:25.528924    3267 status.go:330] ha-597000-m03 host status = "Running" (err=<nil>)
	I0806 00:31:25.528947    3267 host.go:66] Checking if "ha-597000-m03" exists ...
	I0806 00:31:25.529473    3267 host.go:66] Checking if "ha-597000-m03" exists ...
	I0806 00:31:25.530065    3267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:31:25.530095    3267 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m03/id_rsa Username:docker}
	W0806 00:32:40.528313    3267 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0806 00:32:40.528517    3267 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0806 00:32:40.528555    3267 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0806 00:32:40.528574    3267 status.go:257] ha-597000-m03 status: &{Name:ha-597000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0806 00:32:40.528621    3267 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0806 00:32:40.528639    3267 status.go:255] checking status of ha-597000-m04 ...
	I0806 00:32:40.531581    3267 status.go:330] ha-597000-m04 host status = "Running" (err=<nil>)
	I0806 00:32:40.531612    3267 host.go:66] Checking if "ha-597000-m04" exists ...
	I0806 00:32:40.532217    3267 host.go:66] Checking if "ha-597000-m04" exists ...
	I0806 00:32:40.532765    3267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:32:40.532793    3267 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m04/id_rsa Username:docker}
	W0806 00:33:55.533991    3267 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0806 00:33:55.534054    3267 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0806 00:33:55.534062    3267 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0806 00:33:55.534066    3267 status.go:257] ha-597000-m04 status: &{Name:ha-597000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0806 00:33:55.534078    3267 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000: exit status 3 (1m15.044447375s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0806 00:35:10.574235    3327 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0806 00:35:10.574273    3327 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-597000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (305.23s)
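Note: the recovery path here is the one the error text itself suggests; a minimal sketch, with the caveat that the HA topology then has to be rebuilt from scratch:

    # Recovery suggested by the GUEST_NODE_PROVISION error above.
    out/minikube-darwin-arm64 delete -p ha-597000
    out/minikube-darwin-arm64 start -p ha-597000 --driver=qemu2 --network=socket_vmnet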

TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.59s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-597000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-597000 -v=7 --alsologtostderr
E0806 00:38:35.436560    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:40:48.398495    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-597000 -v=7 --alsologtostderr: (5m27.196830625s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-597000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-597000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.229793417s)

-- stdout --
	* [ha-597000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-597000" primary control-plane node in "ha-597000" cluster
	* Restarting existing qemu2 VM for "ha-597000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-597000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 00:43:07.990005    3456 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:43:07.990169    3456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:43:07.990173    3456 out.go:304] Setting ErrFile to fd 2...
	I0806 00:43:07.990176    3456 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:43:07.990360    3456 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:43:07.991594    3456 out.go:298] Setting JSON to false
	I0806 00:43:08.011352    3456 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2555,"bootTime":1722927632,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:43:08.011423    3456 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:43:08.015936    3456 out.go:177] * [ha-597000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:43:08.022881    3456 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:43:08.022911    3456 notify.go:220] Checking for updates...
	I0806 00:43:08.029847    3456 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:43:08.032775    3456 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:43:08.035851    3456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:43:08.038816    3456 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:43:08.039976    3456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:43:08.043093    3456 config.go:182] Loaded profile config "ha-597000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:43:08.043158    3456 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:43:08.047866    3456 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 00:43:08.052768    3456 start.go:297] selected driver: qemu2
	I0806 00:43:08.052775    3456 start.go:901] validating driver "qemu2" against &{Name:ha-597000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-597000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:43:08.052860    3456 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:43:08.055405    3456 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:43:08.055447    3456 cni.go:84] Creating CNI manager for ""
	I0806 00:43:08.055451    3456 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0806 00:43:08.055500    3456 start.go:340] cluster config:
	{Name:ha-597000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-597000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:43:08.059598    3456 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:43:08.067767    3456 out.go:177] * Starting "ha-597000" primary control-plane node in "ha-597000" cluster
	I0806 00:43:08.071838    3456 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:43:08.071854    3456 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 00:43:08.071864    3456 cache.go:56] Caching tarball of preloaded images
	I0806 00:43:08.071922    3456 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:43:08.071928    3456 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:43:08.072004    3456 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/ha-597000/config.json ...
	I0806 00:43:08.072469    3456 start.go:360] acquireMachinesLock for ha-597000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:43:08.072506    3456 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "ha-597000"
	I0806 00:43:08.072515    3456 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:43:08.072523    3456 fix.go:54] fixHost starting: 
	I0806 00:43:08.072645    3456 fix.go:112] recreateIfNeeded on ha-597000: state=Stopped err=<nil>
	W0806 00:43:08.072654    3456 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:43:08.076858    3456 out.go:177] * Restarting existing qemu2 VM for "ha-597000" ...
	I0806 00:43:08.084855    3456 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:43:08.084889    3456 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:5c:51:f4:45:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000/disk.qcow2
	I0806 00:43:08.086982    3456 main.go:141] libmachine: STDOUT: 
	I0806 00:43:08.087002    3456 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:43:08.087031    3456 fix.go:56] duration metric: took 14.511084ms for fixHost
	I0806 00:43:08.087035    3456 start.go:83] releasing machines lock for "ha-597000", held for 14.524125ms
	W0806 00:43:08.087048    3456 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:43:08.087080    3456 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:43:08.087085    3456 start.go:729] Will try again in 5 seconds ...
	I0806 00:43:13.089275    3456 start.go:360] acquireMachinesLock for ha-597000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:43:13.089813    3456 start.go:364] duration metric: took 426µs to acquireMachinesLock for "ha-597000"
	I0806 00:43:13.089940    3456 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:43:13.089964    3456 fix.go:54] fixHost starting: 
	I0806 00:43:13.090735    3456 fix.go:112] recreateIfNeeded on ha-597000: state=Stopped err=<nil>
	W0806 00:43:13.090763    3456 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:43:13.100220    3456 out.go:177] * Restarting existing qemu2 VM for "ha-597000" ...
	I0806 00:43:13.105261    3456 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:43:13.105621    3456 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:5c:51:f4:45:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000/disk.qcow2
	I0806 00:43:13.115317    3456 main.go:141] libmachine: STDOUT: 
	I0806 00:43:13.115376    3456 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:43:13.115463    3456 fix.go:56] duration metric: took 25.503416ms for fixHost
	I0806 00:43:13.115479    3456 start.go:83] releasing machines lock for "ha-597000", held for 25.640625ms
	W0806 00:43:13.115659    3456 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-597000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-597000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:43:13.124312    3456 out.go:177] 
	W0806 00:43:13.128311    3456 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:43:13.128528    3456 out.go:239] * 
	* 
	W0806 00:43:13.131178    3456 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:43:13.146251    3456 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-597000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-597000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000: exit status 7 (33.661125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-597000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.59s)
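Every qemu2 failure in this group bottoms out in the same stderr line: Failed to connect to "/var/run/socket_vmnet": Connection refused. A stand-alone probe of that socket, a minimal sketch for local triage rather than anything from the minikube test suite, looks like this in Go:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the same unix socket the qemu2 driver hands to
		// socket_vmnet_client; a running socket_vmnet daemon accepts at once.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this prints "connection refused", the socket_vmnet daemon on the build host is down, and every subsequent start or restart in this report fails the same way.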

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-597000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.504167ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-597000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-597000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:43:13.288375    3473 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:43:13.288777    3473 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:43:13.288780    3473 out.go:304] Setting ErrFile to fd 2...
	I0806 00:43:13.288783    3473 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:43:13.288908    3473 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:43:13.289113    3473 mustload.go:65] Loading cluster: ha-597000
	I0806 00:43:13.289338    3473 config.go:182] Loaded profile config "ha-597000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0806 00:43:13.289634    3473 out.go:239] ! The control-plane node ha-597000 host is not running (will try others): state=Stopped
	! The control-plane node ha-597000 host is not running (will try others): state=Stopped
	W0806 00:43:13.289742    3473 out.go:239] ! The control-plane node ha-597000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-597000-m02 host is not running (will try others): state=Stopped
	I0806 00:43:13.293717    3473 out.go:177] * The control-plane node ha-597000-m03 host is not running: state=Stopped
	I0806 00:43:13.296627    3473 out.go:177]   To start a cluster, run: "minikube start -p ha-597000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-597000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr: exit status 7 (29.9385ms)

                                                
                                                
-- stdout --
	ha-597000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-597000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-597000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-597000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:43:13.328237    3475 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:43:13.328383    3475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:43:13.328387    3475 out.go:304] Setting ErrFile to fd 2...
	I0806 00:43:13.328389    3475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:43:13.328538    3475 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:43:13.328651    3475 out.go:298] Setting JSON to false
	I0806 00:43:13.328661    3475 mustload.go:65] Loading cluster: ha-597000
	I0806 00:43:13.328721    3475 notify.go:220] Checking for updates...
	I0806 00:43:13.328903    3475 config.go:182] Loaded profile config "ha-597000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:43:13.328909    3475 status.go:255] checking status of ha-597000 ...
	I0806 00:43:13.329132    3475 status.go:330] ha-597000 host status = "Stopped" (err=<nil>)
	I0806 00:43:13.329135    3475 status.go:343] host is not running, skipping remaining checks
	I0806 00:43:13.329137    3475 status.go:257] ha-597000 status: &{Name:ha-597000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:43:13.329147    3475 status.go:255] checking status of ha-597000-m02 ...
	I0806 00:43:13.329246    3475 status.go:330] ha-597000-m02 host status = "Stopped" (err=<nil>)
	I0806 00:43:13.329249    3475 status.go:343] host is not running, skipping remaining checks
	I0806 00:43:13.329251    3475 status.go:257] ha-597000-m02 status: &{Name:ha-597000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:43:13.329255    3475 status.go:255] checking status of ha-597000-m03 ...
	I0806 00:43:13.329339    3475 status.go:330] ha-597000-m03 host status = "Stopped" (err=<nil>)
	I0806 00:43:13.329341    3475 status.go:343] host is not running, skipping remaining checks
	I0806 00:43:13.329343    3475 status.go:257] ha-597000-m03 status: &{Name:ha-597000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:43:13.329347    3475 status.go:255] checking status of ha-597000-m04 ...
	I0806 00:43:13.329442    3475 status.go:330] ha-597000-m04 host status = "Stopped" (err=<nil>)
	I0806 00:43:13.329445    3475 status.go:343] host is not running, skipping remaining checks
	I0806 00:43:13.329447    3475 status.go:257] ha-597000-m04 status: &{Name:ha-597000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000: exit status 7 (29.357583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-597000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-597000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-597000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-597000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-597000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000: exit status 7 (28.538208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-597000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
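The assertion at ha_test.go:413 decodes the `profile list --output json` payload above and compares the profile's Status field ("Degraded" expected, "Stopped" observed). A minimal decoding sketch, with field names copied from that payload and an abbreviated input literal (these are not minikube's own types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors the two top-level keys visible in the JSON above.
	type profileList struct {
		Invalid []json.RawMessage `json:"invalid"`
		Valid   []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Abbreviated from the `profile list` output captured in the log.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-597000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // the test expects "Degraded" here
		}
	}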

                                                
                                    
TestMultiControlPlane/serial/StopCluster (208.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 stop -v=7 --alsologtostderr
E0806 00:43:35.433646    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:45:48.394824    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:46:38.527904    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-597000 stop -v=7 --alsologtostderr: signal: killed (3m28.627759959s)

                                                
                                                
-- stdout --
	* Stopping node "ha-597000-m04"  ...
	* Stopping node "ha-597000-m03"  ...
	* Stopping node "ha-597000-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:43:13.463120    3484 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:43:13.463257    3484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:43:13.463260    3484 out.go:304] Setting ErrFile to fd 2...
	I0806 00:43:13.463262    3484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:43:13.463399    3484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:43:13.463633    3484 out.go:298] Setting JSON to false
	I0806 00:43:13.463743    3484 mustload.go:65] Loading cluster: ha-597000
	I0806 00:43:13.463970    3484 config.go:182] Loaded profile config "ha-597000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:43:13.464039    3484 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/ha-597000/config.json ...
	I0806 00:43:13.464289    3484 mustload.go:65] Loading cluster: ha-597000
	I0806 00:43:13.464370    3484 config.go:182] Loaded profile config "ha-597000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:43:13.464387    3484 stop.go:39] StopHost: ha-597000-m04
	I0806 00:43:13.468579    3484 out.go:177] * Stopping node "ha-597000-m04"  ...
	I0806 00:43:13.475537    3484 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0806 00:43:13.475572    3484 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0806 00:43:13.475589    3484 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m04/id_rsa Username:docker}
	W0806 00:44:28.477648    3484 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0806 00:44:28.478006    3484 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0806 00:44:28.478139    3484 main.go:141] libmachine: Stopping "ha-597000-m04"...
	I0806 00:44:28.478325    3484 stop.go:66] stop err: Machine "ha-597000-m04" is already stopped.
	I0806 00:44:28.478353    3484 stop.go:69] host is already stopped
	I0806 00:44:28.478380    3484 stop.go:39] StopHost: ha-597000-m03
	I0806 00:44:28.483953    3484 out.go:177] * Stopping node "ha-597000-m03"  ...
	I0806 00:44:28.490883    3484 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0806 00:44:28.491046    3484 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0806 00:44:28.491075    3484 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m03/id_rsa Username:docker}
	W0806 00:45:43.492906    3484 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0806 00:45:43.493120    3484 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0806 00:45:43.493275    3484 main.go:141] libmachine: Stopping "ha-597000-m03"...
	I0806 00:45:43.493422    3484 stop.go:66] stop err: Machine "ha-597000-m03" is already stopped.
	I0806 00:45:43.493451    3484 stop.go:69] host is already stopped
	I0806 00:45:43.493478    3484 stop.go:39] StopHost: ha-597000-m02
	I0806 00:45:43.503825    3484 out.go:177] * Stopping node "ha-597000-m02"  ...
	I0806 00:45:43.507844    3484 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0806 00:45:43.507980    3484 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0806 00:45:43.508013    3484 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/ha-597000-m02/id_rsa Username:docker}

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-arm64 -p ha-597000 stop -v=7 --alsologtostderr": signal: killed
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr: context deadline exceeded (2.584µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-597000 -n ha-597000: exit status 7 (71.964167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-597000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (208.70s)
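For triage, the timestamps in the stderr log explain the kill: each already-stopped node costs a full 75-second SSH dial timeout before `stop` moves on (00:43:13 to 00:44:28 to 00:45:43 in the sshutil.go entries), so the third node's backup attempt was still blocked when the harness sent the kill signal at 3m28s. Checking that arithmetic from the logged timestamps (a throwaway sketch, nothing more):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Timestamps transcribed from the sshutil.go dial failures above.
		t1, _ := time.Parse("15:04:05", "00:43:13")
		t2, _ := time.Parse("15:04:05", "00:44:28")
		t3, _ := time.Parse("15:04:05", "00:45:43")
		fmt.Println(t2.Sub(t1), t3.Sub(t2)) // 1m15s 1m15s per unreachable node
	}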

                                                
                                    
TestImageBuild/serial/Setup (10.23s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-575000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-575000 --driver=qemu2 : exit status 80 (10.164313292s)

                                                
                                                
-- stdout --
	* [image-575000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-575000" primary control-plane node in "image-575000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-575000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-575000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-575000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-575000 -n image-575000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-575000 -n image-575000: exit status 7 (67.697958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-575000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.23s)

                                                
                                    
TestJSONOutput/start/Command (9.75s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-537000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-537000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.746517417s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2bdc0b50-abd0-47c0-9f13-8908473284cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-537000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3db767c8-d0aa-425b-8733-f5456c4cda91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19370"}}
	{"specversion":"1.0","id":"0a0e087f-e474-46b5-824e-c3ffed7b19d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig"}}
	{"specversion":"1.0","id":"c5f78e52-a021-46b5-944d-8e4eacfde406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ca2c53d6-9bd0-4541-a993-a69f87531fed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"55507115-f7d5-4ad3-82cf-4bad785b0b89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube"}}
	{"specversion":"1.0","id":"300f912e-7920-49df-9b2f-6fed1f352178","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"147464ca-7e4c-4491-8193-099fcac3b9b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c59ea43-f75d-4086-bdab-10b554174c2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"57af8b4a-cc2b-4fa1-bd44-e2d0e3ff030a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-537000\" primary control-plane node in \"json-output-537000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ceefd56-e2fb-445a-9a12-c693bace0d6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"e7565c99-5723-478b-a313-dfb368d29c84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-537000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d80e137-551a-4ee8-a5eb-0996b1e9dadd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"46470b60-36af-46f4-9add-35b7850de258","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"1e9e7c3e-f93c-4ea1-8284-94a6f68a5b55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-537000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"398bd6d8-5a0d-49c1-a966-725606afde11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"bd148be2-5658-4b7a-91f0-27eacddf7a5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-537000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.75s)
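The marshalling error at json_output_test.go:213 is mechanical: qemu's raw "OUTPUT:" and "ERROR:" lines are interleaved with the cloud events on stdout, so JSON decoding aborts at the 'O', exactly the "invalid character 'O' looking for beginning of value" reported above. A self-contained illustration of the failure mode, with the input abbreviated from the stdout above (this is not how the test itself is implemented):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	func main() {
		// Abbreviated stdout from the failed run: raw driver output is
		// interleaved with the JSON cloud events.
		out := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	{"specversion":"1.0","type":"io.k8s.sigs.minikube.error"}`

		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				// The 'O' of "OUTPUT:" is not a valid start of a JSON value.
				fmt.Printf("not a cloud event: %q\n", sc.Text())
				continue
			}
			fmt.Println("event type:", ev["type"])
		}
	}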

                                                
                                    
TestJSONOutput/pause/Command (0.07s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-537000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-537000 --output=json --user=testUser: exit status 83 (74.040542ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"576dde5f-82d3-4960-8721-98d3392c1b0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-537000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"91b97083-eb13-4e6d-b55e-ce5dd06d5ca6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-537000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-537000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.07s)

                                                
                                    
TestJSONOutput/unpause/Command (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-537000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-537000 --output=json --user=testUser: exit status 83 (44.294917ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-537000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-537000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-537000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-537000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

                                                
                                    
TestMinikubeProfile (10.1s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-758000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-758000 --driver=qemu2 : exit status 80 (9.80834025s)

                                                
                                                
-- stdout --
	* [first-758000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-758000" primary control-plane node in "first-758000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-758000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-758000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-758000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-06 00:47:14.852004 -0700 PDT m=+2582.152146543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-760000 -n second-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-760000 -n second-760000: exit status 85 (77.597625ms)

                                                
                                                
-- stdout --
	* Profile "second-760000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-760000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-760000" host is not running, skipping log retrieval (state="* Profile \"second-760000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-760000\"")
helpers_test.go:175: Cleaning up "second-760000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-760000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-06 00:47:15.040595 -0700 PDT m=+2582.340734793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-758000 -n first-758000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-758000 -n first-758000: exit status 7 (29.216625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-758000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-758000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-758000
--- FAIL: TestMinikubeProfile (10.10s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-775000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-775000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.964609958s)

                                                
                                                
-- stdout --
	* [mount-start-1-775000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-775000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-775000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-775000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-775000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-775000 -n mount-start-1-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-775000 -n mount-start-1-775000: exit status 7 (66.36725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-775000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.03s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-508000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-508000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.817013625s)

                                                
                                                
-- stdout --
	* [multinode-508000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-508000" primary control-plane node in "multinode-508000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-508000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 00:47:25.390267    3679 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:47:25.390402    3679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:47:25.390405    3679 out.go:304] Setting ErrFile to fd 2...
	I0806 00:47:25.390408    3679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:47:25.390535    3679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:47:25.391613    3679 out.go:298] Setting JSON to false
	I0806 00:47:25.407787    3679 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2813,"bootTime":1722927632,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:47:25.407862    3679 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:47:25.416623    3679 out.go:177] * [multinode-508000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:47:25.423647    3679 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:47:25.423697    3679 notify.go:220] Checking for updates...
	I0806 00:47:25.430636    3679 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:47:25.433701    3679 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:47:25.436632    3679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:47:25.439738    3679 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:47:25.442637    3679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:47:25.445741    3679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:47:25.449663    3679 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 00:47:25.456607    3679 start.go:297] selected driver: qemu2
	I0806 00:47:25.456612    3679 start.go:901] validating driver "qemu2" against <nil>
	I0806 00:47:25.456618    3679 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:47:25.458978    3679 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:47:25.462708    3679 out.go:177] * Automatically selected the socket_vmnet network
	I0806 00:47:25.465669    3679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:47:25.465683    3679 cni.go:84] Creating CNI manager for ""
	I0806 00:47:25.465688    3679 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0806 00:47:25.465691    3679 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0806 00:47:25.465723    3679 start.go:340] cluster config:
	{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-508000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:47:25.469494    3679 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:47:25.477684    3679 out.go:177] * Starting "multinode-508000" primary control-plane node in "multinode-508000" cluster
	I0806 00:47:25.481609    3679 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:47:25.481625    3679 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 00:47:25.481634    3679 cache.go:56] Caching tarball of preloaded images
	I0806 00:47:25.481695    3679 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:47:25.481701    3679 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:47:25.481921    3679 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/multinode-508000/config.json ...
	I0806 00:47:25.481937    3679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/multinode-508000/config.json: {Name:mkaca8017f31bccffe34f40ba728a29650b70d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:47:25.482338    3679 start.go:360] acquireMachinesLock for multinode-508000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:47:25.482380    3679 start.go:364] duration metric: took 33.792µs to acquireMachinesLock for "multinode-508000"
	I0806 00:47:25.482393    3679 start.go:93] Provisioning new machine with config: &{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-508000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:47:25.482427    3679 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:47:25.491563    3679 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:47:25.510128    3679 start.go:159] libmachine.API.Create for "multinode-508000" (driver="qemu2")
	I0806 00:47:25.510162    3679 client.go:168] LocalClient.Create starting
	I0806 00:47:25.510221    3679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:47:25.510250    3679 main.go:141] libmachine: Decoding PEM data...
	I0806 00:47:25.510266    3679 main.go:141] libmachine: Parsing certificate...
	I0806 00:47:25.510309    3679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:47:25.510333    3679 main.go:141] libmachine: Decoding PEM data...
	I0806 00:47:25.510347    3679 main.go:141] libmachine: Parsing certificate...
	I0806 00:47:25.510757    3679 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:47:25.655281    3679 main.go:141] libmachine: Creating SSH key...
	I0806 00:47:25.716875    3679 main.go:141] libmachine: Creating Disk image...
	I0806 00:47:25.716880    3679 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:47:25.717057    3679 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2
	I0806 00:47:25.726053    3679 main.go:141] libmachine: STDOUT: 
	I0806 00:47:25.726066    3679 main.go:141] libmachine: STDERR: 
	I0806 00:47:25.726109    3679 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2 +20000M
	I0806 00:47:25.733819    3679 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:47:25.733831    3679 main.go:141] libmachine: STDERR: 
	I0806 00:47:25.733847    3679 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2
	I0806 00:47:25.733861    3679 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:47:25.733872    3679 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:47:25.733896    3679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:be:d0:53:7b:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2
	I0806 00:47:25.735471    3679 main.go:141] libmachine: STDOUT: 
	I0806 00:47:25.735483    3679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:47:25.735499    3679 client.go:171] duration metric: took 225.330542ms to LocalClient.Create
	I0806 00:47:27.737680    3679 start.go:128] duration metric: took 2.255222791s to createHost
	I0806 00:47:27.737838    3679 start.go:83] releasing machines lock for "multinode-508000", held for 2.255351708s
	W0806 00:47:27.737895    3679 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:47:27.749059    3679 out.go:177] * Deleting "multinode-508000" in qemu2 ...
	W0806 00:47:27.776636    3679 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:47:27.776661    3679 start.go:729] Will try again in 5 seconds ...
	I0806 00:47:32.778921    3679 start.go:360] acquireMachinesLock for multinode-508000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:47:32.779450    3679 start.go:364] duration metric: took 364.958µs to acquireMachinesLock for "multinode-508000"
	I0806 00:47:32.779568    3679 start.go:93] Provisioning new machine with config: &{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-508000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:47:32.779760    3679 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:47:32.791436    3679 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:47:32.841145    3679 start.go:159] libmachine.API.Create for "multinode-508000" (driver="qemu2")
	I0806 00:47:32.841193    3679 client.go:168] LocalClient.Create starting
	I0806 00:47:32.841301    3679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:47:32.841357    3679 main.go:141] libmachine: Decoding PEM data...
	I0806 00:47:32.841374    3679 main.go:141] libmachine: Parsing certificate...
	I0806 00:47:32.841430    3679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:47:32.841482    3679 main.go:141] libmachine: Decoding PEM data...
	I0806 00:47:32.841495    3679 main.go:141] libmachine: Parsing certificate...
	I0806 00:47:32.841993    3679 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:47:32.998992    3679 main.go:141] libmachine: Creating SSH key...
	I0806 00:47:33.112619    3679 main.go:141] libmachine: Creating Disk image...
	I0806 00:47:33.112625    3679 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:47:33.112821    3679 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2
	I0806 00:47:33.122262    3679 main.go:141] libmachine: STDOUT: 
	I0806 00:47:33.122282    3679 main.go:141] libmachine: STDERR: 
	I0806 00:47:33.122341    3679 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2 +20000M
	I0806 00:47:33.130349    3679 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:47:33.130362    3679 main.go:141] libmachine: STDERR: 
	I0806 00:47:33.130380    3679 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2
	I0806 00:47:33.130390    3679 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:47:33.130400    3679 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:47:33.130424    3679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:c6:fd:74:e5:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2
	I0806 00:47:33.132121    3679 main.go:141] libmachine: STDOUT: 
	I0806 00:47:33.132138    3679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:47:33.132150    3679 client.go:171] duration metric: took 290.951875ms to LocalClient.Create
	I0806 00:47:35.134320    3679 start.go:128] duration metric: took 2.354524s to createHost
	I0806 00:47:35.134398    3679 start.go:83] releasing machines lock for "multinode-508000", held for 2.354922125s
	W0806 00:47:35.134771    3679 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-508000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:47:35.149071    3679 out.go:177] 
	W0806 00:47:35.154285    3679 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:47:35.154351    3679 out.go:239] * 
	W0806 00:47:35.157200    3679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:47:35.168061    3679 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-508000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (68.949209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.89s)
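
The stderr above also records the launch mechanics: minikube never dials the socket itself; it execs /opt/socket_vmnet/bin/socket_vmnet_client, which connects to the unix socket and hands the connected descriptor to QEMU as "-netdev socket,id=net0,fd=3". The refusal is therefore reproducible without minikube. A quick probe, assuming the stock macOS netcat (whose -U flag targets unix-domain sockets):

    nc -U /var/run/socket_vmnet </dev/null \
      && echo "daemon accepting connections" \
      || echo "connect failed: daemon down or stale socket file"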

TestMultiNode/serial/DeployApp2Nodes (106.67s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (127.860333ms)

** stderr ** 
	error: cluster "multinode-508000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- rollout status deployment/busybox: exit status 1 (58.074375ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.631458ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.347375ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.89975ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.69ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.576041ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.094958ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.168625ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.372041ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.318333ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0806 00:48:35.463027    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.217084ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.262875ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.624708ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.275083ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.860458ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.4725ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (29.667875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (106.67s)
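
The kubectl errors here ("cluster does not exist", "no server found") are fallout from the failed start rather than independent failures: since no VM was provisioned, minikube never wrote cluster, context, or credential entries for multinode-508000 into the kubeconfig, so every `minikube kubectl -p multinode-508000 -- ...` call fails client-side. A sketch to confirm against the same kubeconfig this run used (path taken from the start log above):

    export KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
    kubectl config get-clusters    # no multinode-508000 entry expected
    kubectl config get-contexts    # likewise, no matching context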

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-508000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.996167ms)

** stderr ** 
	error: no server found for cluster "multinode-508000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (29.984333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-508000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-508000 -v 3 --alsologtostderr: exit status 83 (40.3585ms)

-- stdout --
	* The control-plane node multinode-508000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-508000"

-- /stdout --
** stderr ** 
	I0806 00:49:22.034064    3779 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:49:22.034221    3779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:22.034224    3779 out.go:304] Setting ErrFile to fd 2...
	I0806 00:49:22.034232    3779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:22.034337    3779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:49:22.034573    3779 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:49:22.034777    3779 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:49:22.039193    3779 out.go:177] * The control-plane node multinode-508000 host is not running: state=Stopped
	I0806 00:49:22.042087    3779 out.go:177]   To start a cluster, run: "minikube start -p multinode-508000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-508000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (29.187541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-508000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-508000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (30.775792ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-508000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-508000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-508000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (29.699792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.07s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-508000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-508000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-508000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-508000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (29.454ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
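
This assertion parses `profile list --output json` and counts Config.Nodes; the JSON captured above contains a single node entry where the test expects three (two from the initial --nodes=2 start plus one from AddNode). Assuming jq is available on the host, the same count by hand:

    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | {name: .Name, nodes: (.Config.Nodes | length)}'
    # -> {"name":"multinode-508000","nodes":1}  (expected: 3)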

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status --output json --alsologtostderr: exit status 7 (28.776708ms)

-- stdout --
	{"Name":"multinode-508000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0806 00:49:22.239033    3791 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:49:22.239192    3791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:22.239196    3791 out.go:304] Setting ErrFile to fd 2...
	I0806 00:49:22.239198    3791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:22.239322    3791 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:49:22.239425    3791 out.go:298] Setting JSON to true
	I0806 00:49:22.239435    3791 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:49:22.239503    3791 notify.go:220] Checking for updates...
	I0806 00:49:22.239637    3791 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:49:22.239643    3791 status.go:255] checking status of multinode-508000 ...
	I0806 00:49:22.239859    3791 status.go:330] multinode-508000 host status = "Stopped" (err=<nil>)
	I0806 00:49:22.239863    3791 status.go:343] host is not running, skipping remaining checks
	I0806 00:49:22.239865    3791 status.go:257] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-508000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (29.060875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
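
The decode error is a shape mismatch, not corrupt output: with a single node, `status --output json` prints one JSON object (see the stdout block above), while this test unmarshals into a Go slice ([]cmd.Status) and so expects a JSON array with one element per node. Normalizing by hand, again assuming jq:

    out/minikube-darwin-arm64 -p multinode-508000 status --output json \
      | jq 'if type == "array" then . else [.] end | length'
    # -> 1, where a healthy two-node cluster would yield 2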

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 node stop m03: exit status 85 (44.693042ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-508000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status: exit status 7 (29.748583ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr: exit status 7 (28.864625ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:49:22.372105    3799 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:49:22.372258    3799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:22.372264    3799 out.go:304] Setting ErrFile to fd 2...
	I0806 00:49:22.372266    3799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:22.372397    3799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:49:22.372526    3799 out.go:298] Setting JSON to false
	I0806 00:49:22.372537    3799 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:49:22.372576    3799 notify.go:220] Checking for updates...
	I0806 00:49:22.372769    3799 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:49:22.372775    3799 status.go:255] checking status of multinode-508000 ...
	I0806 00:49:22.372985    3799 status.go:330] multinode-508000 host status = "Stopped" (err=<nil>)
	I0806 00:49:22.372988    3799 status.go:343] host is not running, skipping remaining checks
	I0806 00:49:22.372991    3799 status.go:257] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr": multinode-508000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (29.681083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
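Note: the assertion that fails here is the kubelet count at multinode_test.go:267. Since m03 was never created, `minikube status` reports a single stopped node instead of the running nodes the test expects. A minimal sketch of that kind of check, assuming the real test counts status stanzas roughly like this (countRunningKubelets and the sample output are illustrative, not the actual test helpers):

package main

import (
	"fmt"
	"strings"
)

// countRunningKubelets tallies "kubelet: Running" stanzas in the plain-text
// output of `minikube status`; the test fails when the count is wrong.
func countRunningKubelets(statusOut string) int {
	n := 0
	for _, line := range strings.Split(statusOut, "\n") {
		if strings.TrimSpace(line) == "kubelet: Running" {
			n++
		}
	}
	return n
}

func main() {
	// The output captured above: one node, everything stopped.
	out := "multinode-508000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	fmt.Printf("running kubelets: %d (StopNode presumably expects the remaining nodes to stay up)\n",
		countRunningKubelets(out))
}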

TestMultiNode/serial/StartAfterStop (49.58s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.0415ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0806 00:49:22.431640    3803 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:49:22.431894    3803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:22.431900    3803 out.go:304] Setting ErrFile to fd 2...
	I0806 00:49:22.431902    3803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:22.432044    3803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:49:22.432273    3803 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:49:22.432475    3803 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:49:22.437164    3803 out.go:177] 
	W0806 00:49:22.440239    3803 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0806 00:49:22.440244    3803 out.go:239] * 
	* 
	W0806 00:49:22.441881    3803 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:49:22.445106    3803 out.go:177] 

** /stderr **
multinode_test.go:284: I0806 00:49:22.431640    3803 out.go:291] Setting OutFile to fd 1 ...
I0806 00:49:22.431894    3803 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:49:22.431900    3803 out.go:304] Setting ErrFile to fd 2...
I0806 00:49:22.431902    3803 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:49:22.432044    3803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
I0806 00:49:22.432273    3803 mustload.go:65] Loading cluster: multinode-508000
I0806 00:49:22.432475    3803 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:49:22.437164    3803 out.go:177] 
W0806 00:49:22.440239    3803 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0806 00:49:22.440244    3803 out.go:239] * 
* 
W0806 00:49:22.441881    3803 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0806 00:49:22.445106    3803 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-508000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (29.854375ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:49:22.477278    3805 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:49:22.477435    3805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:22.477438    3805 out.go:304] Setting ErrFile to fd 2...
	I0806 00:49:22.477440    3805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:22.477565    3805 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:49:22.477676    3805 out.go:298] Setting JSON to false
	I0806 00:49:22.477691    3805 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:49:22.477747    3805 notify.go:220] Checking for updates...
	I0806 00:49:22.477882    3805 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:49:22.477892    3805 status.go:255] checking status of multinode-508000 ...
	I0806 00:49:22.478111    3805 status.go:330] multinode-508000 host status = "Stopped" (err=<nil>)
	I0806 00:49:22.478115    3805 status.go:343] host is not running, skipping remaining checks
	I0806 00:49:22.478118    3805 status.go:257] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (72.265333ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:49:23.165267    3807 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:49:23.165472    3807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:23.165477    3807 out.go:304] Setting ErrFile to fd 2...
	I0806 00:49:23.165480    3807 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:23.165658    3807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:49:23.165804    3807 out.go:298] Setting JSON to false
	I0806 00:49:23.165817    3807 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:49:23.165847    3807 notify.go:220] Checking for updates...
	I0806 00:49:23.166099    3807 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:49:23.166109    3807 status.go:255] checking status of multinode-508000 ...
	I0806 00:49:23.166383    3807 status.go:330] multinode-508000 host status = "Stopped" (err=<nil>)
	I0806 00:49:23.166388    3807 status.go:343] host is not running, skipping remaining checks
	I0806 00:49:23.166391    3807 status.go:257] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (74.175917ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:49:25.363413    3809 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:49:25.363607    3809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:25.363611    3809 out.go:304] Setting ErrFile to fd 2...
	I0806 00:49:25.363614    3809 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:25.363798    3809 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:49:25.363969    3809 out.go:298] Setting JSON to false
	I0806 00:49:25.363990    3809 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:49:25.364030    3809 notify.go:220] Checking for updates...
	I0806 00:49:25.364263    3809 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:49:25.364270    3809 status.go:255] checking status of multinode-508000 ...
	I0806 00:49:25.364569    3809 status.go:330] multinode-508000 host status = "Stopped" (err=<nil>)
	I0806 00:49:25.364574    3809 status.go:343] host is not running, skipping remaining checks
	I0806 00:49:25.364577    3809 status.go:257] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (73.642875ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:49:26.719140    3811 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:49:26.719388    3811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:26.719393    3811 out.go:304] Setting ErrFile to fd 2...
	I0806 00:49:26.719396    3811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:26.719609    3811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:49:26.719784    3811 out.go:298] Setting JSON to false
	I0806 00:49:26.719798    3811 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:49:26.719844    3811 notify.go:220] Checking for updates...
	I0806 00:49:26.720067    3811 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:49:26.720075    3811 status.go:255] checking status of multinode-508000 ...
	I0806 00:49:26.720394    3811 status.go:330] multinode-508000 host status = "Stopped" (err=<nil>)
	I0806 00:49:26.720400    3811 status.go:343] host is not running, skipping remaining checks
	I0806 00:49:26.720403    3811 status.go:257] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (72.218375ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:49:29.389843    3813 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:49:29.390087    3813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:29.390092    3813 out.go:304] Setting ErrFile to fd 2...
	I0806 00:49:29.390096    3813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:29.390290    3813 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:49:29.390453    3813 out.go:298] Setting JSON to false
	I0806 00:49:29.390467    3813 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:49:29.390516    3813 notify.go:220] Checking for updates...
	I0806 00:49:29.390718    3813 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:49:29.390726    3813 status.go:255] checking status of multinode-508000 ...
	I0806 00:49:29.391031    3813 status.go:330] multinode-508000 host status = "Stopped" (err=<nil>)
	I0806 00:49:29.391037    3813 status.go:343] host is not running, skipping remaining checks
	I0806 00:49:29.391040    3813 status.go:257] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (71.8585ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:49:33.544995    3817 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:49:33.545216    3817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:33.545221    3817 out.go:304] Setting ErrFile to fd 2...
	I0806 00:49:33.545225    3817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:33.545420    3817 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:49:33.545567    3817 out.go:298] Setting JSON to false
	I0806 00:49:33.545579    3817 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:49:33.545624    3817 notify.go:220] Checking for updates...
	I0806 00:49:33.545832    3817 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:49:33.545840    3817 status.go:255] checking status of multinode-508000 ...
	I0806 00:49:33.546101    3817 status.go:330] multinode-508000 host status = "Stopped" (err=<nil>)
	I0806 00:49:33.546106    3817 status.go:343] host is not running, skipping remaining checks
	I0806 00:49:33.546109    3817 status.go:257] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (71.894916ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:49:39.510121    3821 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:49:39.510302    3821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:39.510307    3821 out.go:304] Setting ErrFile to fd 2...
	I0806 00:49:39.510311    3821 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:39.510517    3821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:49:39.510658    3821 out.go:298] Setting JSON to false
	I0806 00:49:39.510671    3821 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:49:39.510714    3821 notify.go:220] Checking for updates...
	I0806 00:49:39.510948    3821 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:49:39.510955    3821 status.go:255] checking status of multinode-508000 ...
	I0806 00:49:39.511232    3821 status.go:330] multinode-508000 host status = "Stopped" (err=<nil>)
	I0806 00:49:39.511236    3821 status.go:343] host is not running, skipping remaining checks
	I0806 00:49:39.511239    3821 status.go:257] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (72.666209ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:49:55.303393    3830 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:49:55.303609    3830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:55.303613    3830 out.go:304] Setting ErrFile to fd 2...
	I0806 00:49:55.303617    3830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:49:55.303816    3830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:49:55.303980    3830 out.go:298] Setting JSON to false
	I0806 00:49:55.303993    3830 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:49:55.304039    3830 notify.go:220] Checking for updates...
	I0806 00:49:55.304277    3830 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:49:55.304286    3830 status.go:255] checking status of multinode-508000 ...
	I0806 00:49:55.304589    3830 status.go:330] multinode-508000 host status = "Stopped" (err=<nil>)
	I0806 00:49:55.304594    3830 status.go:343] host is not running, skipping remaining checks
	I0806 00:49:55.304597    3830 status.go:257] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr: exit status 7 (72.227167ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:50:11.947320    3832 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:50:11.947507    3832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:50:11.947512    3832 out.go:304] Setting ErrFile to fd 2...
	I0806 00:50:11.947515    3832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:50:11.947692    3832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:50:11.947842    3832 out.go:298] Setting JSON to false
	I0806 00:50:11.947854    3832 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:50:11.947902    3832 notify.go:220] Checking for updates...
	I0806 00:50:11.948126    3832 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:50:11.948137    3832 status.go:255] checking status of multinode-508000 ...
	I0806 00:50:11.948426    3832 status.go:330] multinode-508000 host status = "Stopped" (err=<nil>)
	I0806 00:50:11.948431    3832 status.go:343] host is not running, skipping remaining checks
	I0806 00:50:11.948434    3832 status.go:257] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-508000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (32.965333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (49.58s)
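Note: the timestamps above (00:49:22 through 00:50:11) show multinode_test.go:290 re-running `minikube status` with growing gaps between attempts until it gives up, which is where most of this subtest's 49 seconds go. A rough sketch of such a retry loop, assuming exponential backoff with a fixed deadline (the delays and deadline are guesses from the log intervals; the real test uses its own retry helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	deadline := time.Now().Add(45 * time.Second)
	for time.Now().Before(deadline) {
		// `minikube status` exits non-zero (7 here) while any component is stopped.
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-508000",
			"status", "-v=7", "--alsologtostderr")
		if err := cmd.Run(); err == nil {
			fmt.Println("status succeeded; the node came back")
			return
		}
		time.Sleep(delay)
		delay *= 2 // widen the gap between attempts, as the logged intervals suggest
	}
	fmt.Println("gave up: status never returned exit code 0")
}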

TestMultiNode/serial/RestartKeepsNodes (8.41s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-508000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-508000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-508000: (3.061995666s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-508000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-508000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.215646291s)

-- stdout --
	* [multinode-508000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-508000" primary control-plane node in "multinode-508000" cluster
	* Restarting existing qemu2 VM for "multinode-508000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-508000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 00:50:15.133160    3856 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:50:15.133307    3856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:50:15.133311    3856 out.go:304] Setting ErrFile to fd 2...
	I0806 00:50:15.133314    3856 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:50:15.133478    3856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:50:15.134637    3856 out.go:298] Setting JSON to false
	I0806 00:50:15.153653    3856 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2983,"bootTime":1722927632,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:50:15.153718    3856 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:50:15.158611    3856 out.go:177] * [multinode-508000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:50:15.165607    3856 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:50:15.165646    3856 notify.go:220] Checking for updates...
	I0806 00:50:15.172626    3856 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:50:15.175627    3856 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:50:15.178568    3856 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:50:15.181587    3856 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:50:15.184681    3856 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:50:15.187915    3856 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:50:15.187969    3856 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:50:15.192531    3856 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 00:50:15.199511    3856 start.go:297] selected driver: qemu2
	I0806 00:50:15.199518    3856 start.go:901] validating driver "qemu2" against &{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-508000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:50:15.199569    3856 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:50:15.202076    3856 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:50:15.202142    3856 cni.go:84] Creating CNI manager for ""
	I0806 00:50:15.202154    3856 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:50:15.202211    3856 start.go:340] cluster config:
	{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-508000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:50:15.206072    3856 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:50:15.213406    3856 out.go:177] * Starting "multinode-508000" primary control-plane node in "multinode-508000" cluster
	I0806 00:50:15.217624    3856 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:50:15.217644    3856 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 00:50:15.217652    3856 cache.go:56] Caching tarball of preloaded images
	I0806 00:50:15.217712    3856 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:50:15.217720    3856 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:50:15.217783    3856 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/multinode-508000/config.json ...
	I0806 00:50:15.218269    3856 start.go:360] acquireMachinesLock for multinode-508000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:50:15.218307    3856 start.go:364] duration metric: took 31.417µs to acquireMachinesLock for "multinode-508000"
	I0806 00:50:15.218316    3856 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:50:15.218324    3856 fix.go:54] fixHost starting: 
	I0806 00:50:15.218453    3856 fix.go:112] recreateIfNeeded on multinode-508000: state=Stopped err=<nil>
	W0806 00:50:15.218463    3856 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:50:15.226572    3856 out.go:177] * Restarting existing qemu2 VM for "multinode-508000" ...
	I0806 00:50:15.230605    3856 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:50:15.230653    3856 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:c6:fd:74:e5:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2
	I0806 00:50:15.232930    3856 main.go:141] libmachine: STDOUT: 
	I0806 00:50:15.232951    3856 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:50:15.232983    3856 fix.go:56] duration metric: took 14.66025ms for fixHost
	I0806 00:50:15.232987    3856 start.go:83] releasing machines lock for "multinode-508000", held for 14.675417ms
	W0806 00:50:15.232994    3856 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:50:15.233041    3856 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:50:15.233046    3856 start.go:729] Will try again in 5 seconds ...
	I0806 00:50:20.235249    3856 start.go:360] acquireMachinesLock for multinode-508000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:50:20.235773    3856 start.go:364] duration metric: took 390.75µs to acquireMachinesLock for "multinode-508000"
	I0806 00:50:20.235934    3856 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:50:20.235958    3856 fix.go:54] fixHost starting: 
	I0806 00:50:20.236718    3856 fix.go:112] recreateIfNeeded on multinode-508000: state=Stopped err=<nil>
	W0806 00:50:20.236745    3856 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:50:20.240741    3856 out.go:177] * Restarting existing qemu2 VM for "multinode-508000" ...
	I0806 00:50:20.246718    3856 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:50:20.246941    3856 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:c6:fd:74:e5:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2
	I0806 00:50:20.256212    3856 main.go:141] libmachine: STDOUT: 
	I0806 00:50:20.256274    3856 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:50:20.256371    3856 fix.go:56] duration metric: took 20.415417ms for fixHost
	I0806 00:50:20.256388    3856 start.go:83] releasing machines lock for "multinode-508000", held for 20.556791ms
	W0806 00:50:20.256572    3856 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-508000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-508000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:50:20.263729    3856 out.go:177] 
	W0806 00:50:20.267799    3856 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:50:20.267822    3856 out.go:239] * 
	* 
	W0806 00:50:20.270142    3856 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:50:20.277761    3856 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-508000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-508000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (33.254166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.41s)
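Note: both restart attempts above fail for the same underlying reason: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client gets "Connection refused" before QEMU can even boot. A standalone probe for that precondition might look like the sketch below (an illustrative diagnostic, not part of minikube or the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The qemu2 driver hands QEMU its network fd over this unix socket; if the
	// socket_vmnet daemon is down, every VM start fails exactly as logged above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err) // "connection refused" matches the log
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}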

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 node delete m03: exit status 83 (39.227667ms)

-- stdout --
	* The control-plane node multinode-508000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-508000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-508000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr: exit status 7 (28.733292ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:50:20.460610    3877 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:50:20.460746    3877 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:50:20.460750    3877 out.go:304] Setting ErrFile to fd 2...
	I0806 00:50:20.460753    3877 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:50:20.460886    3877 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:50:20.461001    3877 out.go:298] Setting JSON to false
	I0806 00:50:20.461010    3877 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:50:20.461075    3877 notify.go:220] Checking for updates...
	I0806 00:50:20.461226    3877 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:50:20.461236    3877 status.go:255] checking status of multinode-508000 ...
	I0806 00:50:20.461437    3877 status.go:330] multinode-508000 host status = "Stopped" (err=<nil>)
	I0806 00:50:20.461441    3877 status.go:343] host is not running, skipping remaining checks
	I0806 00:50:20.461443    3877 status.go:257] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (29.478084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
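Note: this section distinguishes failures purely by exit code: `node delete` returns 83 (control-plane host not running), while `status` returns 7 (components stopped). A sketch of pulling such codes out of a subprocess with os/exec follows; the code-to-meaning pairing is read off the log above rather than from an authoritative minikube table:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and returns its exit status, or -1 when the
// process could not be started at all.
func exitCode(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return -1
}

func main() {
	code := exitCode("out/minikube-darwin-arm64", "-p", "multinode-508000", "node", "delete", "m03")
	fmt.Println("node delete exit code:", code) // 83 in the run above
}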

TestMultiNode/serial/StopMultiNode (3.98s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-508000 stop: (3.856346334s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status: exit status 7 (63.030459ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr: exit status 7 (30.968958ms)

-- stdout --
	multinode-508000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:50:24.441084    3903 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:50:24.441228    3903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:50:24.441231    3903 out.go:304] Setting ErrFile to fd 2...
	I0806 00:50:24.441234    3903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:50:24.441363    3903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:50:24.441484    3903 out.go:298] Setting JSON to false
	I0806 00:50:24.441496    3903 mustload.go:65] Loading cluster: multinode-508000
	I0806 00:50:24.441560    3903 notify.go:220] Checking for updates...
	I0806 00:50:24.441699    3903 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:50:24.441704    3903 status.go:255] checking status of multinode-508000 ...
	I0806 00:50:24.441893    3903 status.go:330] multinode-508000 host status = "Stopped" (err=<nil>)
	I0806 00:50:24.441897    3903 status.go:343] host is not running, skipping remaining checks
	I0806 00:50:24.441899    3903 status.go:257] multinode-508000 status: &{Name:multinode-508000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr": multinode-508000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-508000 status --alsologtostderr": multinode-508000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (28.786417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.98s)
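Note: multinode_test.go:364 and :368 fail because the status output holds one "host: Stopped" and one "kubelet: Stopped" stanza where the multi-node cluster this test builds should presumably produce two of each after `minikube stop`. Counting stanzas in the captured text is enough to see why (a sketch with an illustrative sample, not the test's actual helper):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output captured above: only the primary node ever existed.
	out := "multinode-508000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	hosts := strings.Count(out, "host: Stopped")
	kubelets := strings.Count(out, "kubelet: Stopped")
	fmt.Printf("stopped hosts: %d, stopped kubelets: %d (one per node is expected)\n",
		hosts, kubelets)
}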

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-508000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-508000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.181991042s)

-- stdout --
	* [multinode-508000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-508000" primary control-plane node in "multinode-508000" cluster
	* Restarting existing qemu2 VM for "multinode-508000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-508000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 00:50:24.498893    3907 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:50:24.499027    3907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:50:24.499030    3907 out.go:304] Setting ErrFile to fd 2...
	I0806 00:50:24.499033    3907 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:50:24.499159    3907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:50:24.500190    3907 out.go:298] Setting JSON to false
	I0806 00:50:24.516062    3907 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2992,"bootTime":1722927632,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:50:24.516135    3907 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:50:24.521570    3907 out.go:177] * [multinode-508000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:50:24.528523    3907 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:50:24.528564    3907 notify.go:220] Checking for updates...
	I0806 00:50:24.535547    3907 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:50:24.538532    3907 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:50:24.541495    3907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:50:24.544565    3907 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:50:24.547470    3907 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:50:24.550747    3907 config.go:182] Loaded profile config "multinode-508000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:50:24.551044    3907 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:50:24.555503    3907 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 00:50:24.562456    3907 start.go:297] selected driver: qemu2
	I0806 00:50:24.562463    3907 start.go:901] validating driver "qemu2" against &{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-508000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:50:24.562511    3907 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:50:24.564896    3907 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:50:24.564936    3907 cni.go:84] Creating CNI manager for ""
	I0806 00:50:24.564941    3907 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:50:24.564981    3907 start.go:340] cluster config:
	{Name:multinode-508000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-508000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:50:24.568566    3907 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:50:24.574473    3907 out.go:177] * Starting "multinode-508000" primary control-plane node in "multinode-508000" cluster
	I0806 00:50:24.578525    3907 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:50:24.578543    3907 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 00:50:24.578549    3907 cache.go:56] Caching tarball of preloaded images
	I0806 00:50:24.578620    3907 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:50:24.578626    3907 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:50:24.578693    3907 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/multinode-508000/config.json ...
	I0806 00:50:24.579161    3907 start.go:360] acquireMachinesLock for multinode-508000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:50:24.579192    3907 start.go:364] duration metric: took 24.541µs to acquireMachinesLock for "multinode-508000"
	I0806 00:50:24.579201    3907 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:50:24.579210    3907 fix.go:54] fixHost starting: 
	I0806 00:50:24.579326    3907 fix.go:112] recreateIfNeeded on multinode-508000: state=Stopped err=<nil>
	W0806 00:50:24.579335    3907 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:50:24.586523    3907 out.go:177] * Restarting existing qemu2 VM for "multinode-508000" ...
	I0806 00:50:24.590361    3907 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:50:24.590411    3907 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:c6:fd:74:e5:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2
	I0806 00:50:24.592544    3907 main.go:141] libmachine: STDOUT: 
	I0806 00:50:24.592565    3907 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:50:24.592594    3907 fix.go:56] duration metric: took 13.386125ms for fixHost
	I0806 00:50:24.592606    3907 start.go:83] releasing machines lock for "multinode-508000", held for 13.403208ms
	W0806 00:50:24.592612    3907 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:50:24.592648    3907 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:50:24.592653    3907 start.go:729] Will try again in 5 seconds ...
	I0806 00:50:29.594890    3907 start.go:360] acquireMachinesLock for multinode-508000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:50:29.595328    3907 start.go:364] duration metric: took 319.459µs to acquireMachinesLock for "multinode-508000"
	I0806 00:50:29.595413    3907 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:50:29.595428    3907 fix.go:54] fixHost starting: 
	I0806 00:50:29.595920    3907 fix.go:112] recreateIfNeeded on multinode-508000: state=Stopped err=<nil>
	W0806 00:50:29.595944    3907 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:50:29.603353    3907 out.go:177] * Restarting existing qemu2 VM for "multinode-508000" ...
	I0806 00:50:29.608416    3907 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:50:29.608578    3907 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:c6:fd:74:e5:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/multinode-508000/disk.qcow2
	I0806 00:50:29.614991    3907 main.go:141] libmachine: STDOUT: 
	I0806 00:50:29.615044    3907 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:50:29.615116    3907 fix.go:56] duration metric: took 19.688792ms for fixHost
	I0806 00:50:29.615132    3907 start.go:83] releasing machines lock for "multinode-508000", held for 19.787458ms
	W0806 00:50:29.615322    3907 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-508000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-508000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:50:29.622549    3907 out.go:177] 
	W0806 00:50:29.628630    3907 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:50:29.628665    3907 out.go:239] * 
	* 
	W0806 00:50:29.631605    3907 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:50:29.640396    3907 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-508000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (66.646916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
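Both restart attempts above fail at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the unix socket at /var/run/socket_vmnet, so the error is raised before the VM ever boots. A minimal Go sketch that reproduces the connectivity check outside minikube, assuming only the socket path shown in the logs:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Dial the same unix socket that socket_vmnet_client uses; a
    	// "connection refused" here means the socket_vmnet daemon is not
    	// running (or not listening on this path) on the CI host.
    	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
    	if err != nil {
    		fmt.Println("socket_vmnet unreachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }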

TestMultiNode/serial/ValidateNameConflict (20.1s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-508000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-508000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-508000-m01 --driver=qemu2 : exit status 80 (9.9634115s)

-- stdout --
	* [multinode-508000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-508000-m01" primary control-plane node in "multinode-508000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-508000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-508000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-508000-m02 --driver=qemu2 
E0806 00:50:48.425197    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-508000-m02 --driver=qemu2 : exit status 80 (9.912258875s)

-- stdout --
	* [multinode-508000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-508000-m02" primary control-plane node in "multinode-508000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-508000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-508000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-508000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-508000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-508000: exit status 83 (79.634167ms)

-- stdout --
	* The control-plane node multinode-508000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-508000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-508000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-508000 -n multinode-508000: exit status 7 (29.266542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-508000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.10s)
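For context on the two start invocations above: the first profile name is derived to collide with minikube's node-name suffix scheme (secondary nodes of a profile are named <profile>-m02, <profile>-m03, and so on), so its start is expected to fail; the second uses the next free suffix and is expected to start cleanly, and here it fails only because of the same socket_vmnet problem. A hypothetical Go sketch of the name derivation (mirroring the test's intent, not quoting multinode_test.go):

    package main

    import "fmt"

    func main() {
    	// One node survives the earlier failures, so nodeCount is 1.
    	profile, nodeCount := "multinode-508000", 1

    	// A new profile named after the current node count collides with
    	// the node naming scheme and should be rejected...
    	conflicting := fmt.Sprintf("%s-m0%d", profile, nodeCount)
    	// ...while the next suffix is treated as an ordinary new profile.
    	fresh := fmt.Sprintf("%s-m0%d", profile, nodeCount+1)

    	fmt.Println(conflicting, fresh) // multinode-508000-m01 multinode-508000-m02
    }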

TestPreload (10.15s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-300000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-300000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.999285875s)

-- stdout --
	* [test-preload-300000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-300000" primary control-plane node in "test-preload-300000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-300000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 00:50:49.959078    3968 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:50:49.959196    3968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:50:49.959200    3968 out.go:304] Setting ErrFile to fd 2...
	I0806 00:50:49.959203    3968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:50:49.959350    3968 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:50:49.960401    3968 out.go:298] Setting JSON to false
	I0806 00:50:49.976437    3968 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3017,"bootTime":1722927632,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:50:49.976503    3968 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:50:49.982297    3968 out.go:177] * [test-preload-300000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:50:49.990209    3968 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:50:49.990272    3968 notify.go:220] Checking for updates...
	I0806 00:50:49.998221    3968 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:50:50.001228    3968 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:50:50.004285    3968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:50:50.007310    3968 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:50:50.010259    3968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:50:50.013565    3968 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:50:50.013619    3968 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:50:50.018290    3968 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 00:50:50.025212    3968 start.go:297] selected driver: qemu2
	I0806 00:50:50.025218    3968 start.go:901] validating driver "qemu2" against <nil>
	I0806 00:50:50.025224    3968 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:50:50.027679    3968 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:50:50.031244    3968 out.go:177] * Automatically selected the socket_vmnet network
	I0806 00:50:50.034192    3968 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:50:50.034208    3968 cni.go:84] Creating CNI manager for ""
	I0806 00:50:50.034215    3968 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:50:50.034219    3968 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 00:50:50.034244    3968 start.go:340] cluster config:
	{Name:test-preload-300000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-300000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:50:50.037928    3968 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:50:50.045295    3968 out.go:177] * Starting "test-preload-300000" primary control-plane node in "test-preload-300000" cluster
	I0806 00:50:50.053219    3968 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0806 00:50:50.053308    3968 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/test-preload-300000/config.json ...
	I0806 00:50:50.053333    3968 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/test-preload-300000/config.json: {Name:mk88285ffeba073e4df093611d44e9206a4a660e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:50:50.053344    3968 cache.go:107] acquiring lock: {Name:mk092792f1d077f24b78422b7c0bdf32a6e62d44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:50:50.053344    3968 cache.go:107] acquiring lock: {Name:mk096cb4f0ad94d4920172177354c0e95b29152f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:50:50.053378    3968 cache.go:107] acquiring lock: {Name:mk05d5c414a7016cd4bc5b3fea66e0e4c895aab5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:50:50.053554    3968 cache.go:107] acquiring lock: {Name:mk6f49cd9b9ea06bfb2c38edb094760b8e38f450 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:50:50.053592    3968 cache.go:107] acquiring lock: {Name:mkd3bc518b24c263ee00e71a285739950d51f2b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:50:50.053610    3968 cache.go:107] acquiring lock: {Name:mk3757ad30e2394cd169589a50c264d09904fc78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:50:50.053599    3968 cache.go:107] acquiring lock: {Name:mk42c4ffc8d2058bf4546ded3cfe6c2b92814040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:50:50.053789    3968 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0806 00:50:50.053793    3968 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0806 00:50:50.053813    3968 start.go:360] acquireMachinesLock for test-preload-300000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:50:50.053809    3968 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0806 00:50:50.053858    3968 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0806 00:50:50.053868    3968 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0806 00:50:50.053886    3968 start.go:364] duration metric: took 58.333µs to acquireMachinesLock for "test-preload-300000"
	I0806 00:50:50.053899    3968 start.go:93] Provisioning new machine with config: &{Name:test-preload-300000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-300000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:50:50.053931    3968 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:50:50.053594    3968 cache.go:107] acquiring lock: {Name:mkab2a34723ef0a1c5e0474ff6c4eba4aae85e3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:50:50.053983    3968 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0806 00:50:50.053855    3968 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:50:50.054059    3968 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:50:50.059172    3968 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:50:50.062375    3968 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0806 00:50:50.062423    3968 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0806 00:50:50.066195    3968 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:50:50.066218    3968 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0806 00:50:50.066247    3968 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0806 00:50:50.066269    3968 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:50:50.066316    3968 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0806 00:50:50.066375    3968 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0806 00:50:50.077815    3968 start.go:159] libmachine.API.Create for "test-preload-300000" (driver="qemu2")
	I0806 00:50:50.077836    3968 client.go:168] LocalClient.Create starting
	I0806 00:50:50.077905    3968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:50:50.077937    3968 main.go:141] libmachine: Decoding PEM data...
	I0806 00:50:50.077948    3968 main.go:141] libmachine: Parsing certificate...
	I0806 00:50:50.077991    3968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:50:50.078014    3968 main.go:141] libmachine: Decoding PEM data...
	I0806 00:50:50.078025    3968 main.go:141] libmachine: Parsing certificate...
	I0806 00:50:50.078408    3968 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:50:50.286433    3968 main.go:141] libmachine: Creating SSH key...
	I0806 00:50:50.398435    3968 main.go:141] libmachine: Creating Disk image...
	I0806 00:50:50.398477    3968 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:50:50.398734    3968 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/disk.qcow2
	I0806 00:50:50.408716    3968 main.go:141] libmachine: STDOUT: 
	I0806 00:50:50.408736    3968 main.go:141] libmachine: STDERR: 
	I0806 00:50:50.408801    3968 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/disk.qcow2 +20000M
	I0806 00:50:50.417889    3968 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:50:50.417910    3968 main.go:141] libmachine: STDERR: 
	I0806 00:50:50.417923    3968 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/disk.qcow2
	I0806 00:50:50.417928    3968 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:50:50.417943    3968 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:50:50.417969    3968 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:d9:40:64:67:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/disk.qcow2
	I0806 00:50:50.419994    3968 main.go:141] libmachine: STDOUT: 
	I0806 00:50:50.420017    3968 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:50:50.420037    3968 client.go:171] duration metric: took 342.197583ms to LocalClient.Create
	I0806 00:50:50.526360    3968 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0806 00:50:50.531685    3968 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0806 00:50:50.539875    3968 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0806 00:50:50.539897    3968 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0806 00:50:50.554615    3968 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0806 00:50:50.557077    3968 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0806 00:50:50.594252    3968 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0806 00:50:50.653202    3968 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0806 00:50:50.653238    3968 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 599.710208ms
	I0806 00:50:50.653260    3968 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0806 00:50:50.671284    3968 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0806 00:50:51.047563    3968 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0806 00:50:51.047660    3968 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0806 00:50:51.315703    3968 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0806 00:50:51.315778    3968 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.262433417s
	I0806 00:50:51.315809    3968 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0806 00:50:52.254549    3968 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0806 00:50:52.254589    3968 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.201030083s
	I0806 00:50:52.254625    3968 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0806 00:50:52.420292    3968 start.go:128] duration metric: took 2.366352834s to createHost
	I0806 00:50:52.420338    3968 start.go:83] releasing machines lock for "test-preload-300000", held for 2.36645625s
	W0806 00:50:52.420423    3968 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:50:52.434591    3968 out.go:177] * Deleting "test-preload-300000" in qemu2 ...
	W0806 00:50:52.463850    3968 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:50:52.463875    3968 start.go:729] Will try again in 5 seconds ...
	I0806 00:50:53.268142    3968 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0806 00:50:53.268194    3968 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.214635875s
	I0806 00:50:53.268220    3968 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0806 00:50:53.930709    3968 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0806 00:50:53.930760    3968 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 3.877431959s
	I0806 00:50:53.930784    3968 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0806 00:50:54.878268    3968 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0806 00:50:54.878347    3968 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.825032917s
	I0806 00:50:54.878382    3968 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0806 00:50:56.084479    3968 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0806 00:50:56.084530    3968 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.030953042s
	I0806 00:50:56.084680    3968 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0806 00:50:57.464114    3968 start.go:360] acquireMachinesLock for test-preload-300000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:50:57.464655    3968 start.go:364] duration metric: took 452.625µs to acquireMachinesLock for "test-preload-300000"
	I0806 00:50:57.464790    3968 start.go:93] Provisioning new machine with config: &{Name:test-preload-300000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-300000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:50:57.465080    3968 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:50:57.475836    3968 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:50:57.526697    3968 start.go:159] libmachine.API.Create for "test-preload-300000" (driver="qemu2")
	I0806 00:50:57.526749    3968 client.go:168] LocalClient.Create starting
	I0806 00:50:57.526875    3968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:50:57.526936    3968 main.go:141] libmachine: Decoding PEM data...
	I0806 00:50:57.526966    3968 main.go:141] libmachine: Parsing certificate...
	I0806 00:50:57.527026    3968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:50:57.527071    3968 main.go:141] libmachine: Decoding PEM data...
	I0806 00:50:57.527085    3968 main.go:141] libmachine: Parsing certificate...
	I0806 00:50:57.527605    3968 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:50:57.687418    3968 main.go:141] libmachine: Creating SSH key...
	I0806 00:50:57.856742    3968 main.go:141] libmachine: Creating Disk image...
	I0806 00:50:57.856756    3968 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:50:57.856975    3968 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/disk.qcow2
	I0806 00:50:57.866886    3968 main.go:141] libmachine: STDOUT: 
	I0806 00:50:57.866906    3968 main.go:141] libmachine: STDERR: 
	I0806 00:50:57.866949    3968 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/disk.qcow2 +20000M
	I0806 00:50:57.874862    3968 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:50:57.874886    3968 main.go:141] libmachine: STDERR: 
	I0806 00:50:57.874901    3968 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/disk.qcow2
	I0806 00:50:57.874913    3968 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:50:57.874921    3968 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:50:57.874957    3968 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:84:ea:9b:8f:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/test-preload-300000/disk.qcow2
	I0806 00:50:57.876643    3968 main.go:141] libmachine: STDOUT: 
	I0806 00:50:57.876657    3968 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:50:57.876674    3968 client.go:171] duration metric: took 349.922625ms to LocalClient.Create
	I0806 00:50:59.388268    3968 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0806 00:50:59.388348    3968 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.334839416s
	I0806 00:50:59.388393    3968 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0806 00:50:59.388437    3968 cache.go:87] Successfully saved all images to host disk.
	I0806 00:50:59.878847    3968 start.go:128] duration metric: took 2.41375025s to createHost
	I0806 00:50:59.878881    3968 start.go:83] releasing machines lock for "test-preload-300000", held for 2.414214166s
	W0806 00:50:59.879170    3968 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-300000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-300000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:50:59.895758    3968 out.go:177] 
	W0806 00:50:59.899660    3968 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:50:59.899683    3968 out.go:239] * 
	* 
	W0806 00:50:59.902317    3968 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:50:59.915654    3968 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-300000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-06 00:50:59.933818 -0700 PDT m=+2807.235063543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-300000 -n test-preload-300000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-300000 -n test-preload-300000: exit status 7 (65.276084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-300000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-300000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-300000
--- FAIL: TestPreload (10.15s)
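Note on the failure mode above: VM creation gets as far as launching QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client exits with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', i.e. no socket_vmnet daemon was listening on the build agent; the same error repeats in the failures below. A minimal diagnostic sketch in shell, assuming socket_vmnet is installed under the /opt/socket_vmnet prefix seen in the log; the gateway flag and foreground invocation follow the upstream socket_vmnet README and are assumptions about this agent's setup, not values captured in this run:

	# Does the socket exist, and does any process hold it open?
	ls -l /var/run/socket_vmnet
	sudo lsof /var/run/socket_vmnet
	# If nothing is listening, run the daemon in the foreground as a smoke test
	# (192.168.105.1 is the upstream README's example gateway, an assumption here):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
	# With the daemon up, the client invocation from the log should connect;
	# a trivial child command is enough to verify the handshake:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true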

TestScheduledStopUnix (10.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-155000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-155000 --memory=2048 --driver=qemu2 : exit status 80 (9.896683834s)

-- stdout --
	* [scheduled-stop-155000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-155000" primary control-plane node in "scheduled-stop-155000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-155000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-155000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-155000" primary control-plane node in "scheduled-stop-155000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-155000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-06 00:51:09.976492 -0700 PDT m=+2817.277802168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-155000 -n scheduled-stop-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-155000 -n scheduled-stop-155000: exit status 7 (67.27375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-155000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-155000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-155000
--- FAIL: TestScheduledStopUnix (10.04s)

TestSkaffold (12.62s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3651837929 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3651837929 version: (1.069348209s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-860000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-860000 --memory=2600 --driver=qemu2 : exit status 80 (9.917591166s)

-- stdout --
	* [skaffold-860000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-860000" primary control-plane node in "skaffold-860000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-860000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-860000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-860000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-860000" primary control-plane node in "skaffold-860000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-860000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-860000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-06 00:51:22.589815 -0700 PDT m=+2829.891207376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-860000 -n skaffold-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-860000 -n skaffold-860000: exit status 7 (64.326ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-860000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-860000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-860000
--- FAIL: TestSkaffold (12.62s)

TestRunningBinaryUpgrade (592.76s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.263502696 start -p running-upgrade-217000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.263502696 start -p running-upgrade-217000 --memory=2200 --vm-driver=qemu2 : (54.975275166s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-217000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0806 00:53:35.460785    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:53:51.490627    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-217000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.649514167s)

-- stdout --
	* [running-upgrade-217000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-217000" primary control-plane node in "running-upgrade-217000" cluster
	* Updating the running qemu2 "running-upgrade-217000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0806 00:53:00.866536    4369 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:53:00.866674    4369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:00.866677    4369 out.go:304] Setting ErrFile to fd 2...
	I0806 00:53:00.866680    4369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:00.866828    4369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:53:00.867886    4369 out.go:298] Setting JSON to false
	I0806 00:53:00.884323    4369 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3148,"bootTime":1722927632,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:53:00.884391    4369 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:53:00.889265    4369 out.go:177] * [running-upgrade-217000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:53:00.896182    4369 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:53:00.896215    4369 notify.go:220] Checking for updates...
	I0806 00:53:00.903114    4369 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:53:00.906176    4369 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:53:00.909141    4369 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:53:00.912131    4369 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:53:00.915185    4369 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:53:00.918420    4369 config.go:182] Loaded profile config "running-upgrade-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 00:53:00.922126    4369 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0806 00:53:00.925144    4369 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:53:00.929192    4369 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 00:53:00.936119    4369 start.go:297] selected driver: qemu2
	I0806 00:53:00.936125    4369 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-217000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50262 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-217000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0806 00:53:00.936175    4369 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:53:00.938423    4369 cni.go:84] Creating CNI manager for ""
	I0806 00:53:00.938440    4369 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:53:00.938464    4369 start.go:340] cluster config:
	{Name:running-upgrade-217000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50262 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-217000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0806 00:53:00.938516    4369 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:53:00.944119    4369 out.go:177] * Starting "running-upgrade-217000" primary control-plane node in "running-upgrade-217000" cluster
	I0806 00:53:00.948148    4369 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0806 00:53:00.948165    4369 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0806 00:53:00.948172    4369 cache.go:56] Caching tarball of preloaded images
	I0806 00:53:00.948226    4369 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:53:00.948231    4369 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0806 00:53:00.948297    4369 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/config.json ...
	I0806 00:53:00.948780    4369 start.go:360] acquireMachinesLock for running-upgrade-217000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:53:00.948812    4369 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "running-upgrade-217000"
	I0806 00:53:00.948820    4369 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:53:00.948826    4369 fix.go:54] fixHost starting: 
	I0806 00:53:00.949400    4369 fix.go:112] recreateIfNeeded on running-upgrade-217000: state=Running err=<nil>
	W0806 00:53:00.949409    4369 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:53:00.953113    4369 out.go:177] * Updating the running qemu2 "running-upgrade-217000" VM ...
	I0806 00:53:00.960951    4369 machine.go:94] provisionDockerMachine start ...
	I0806 00:53:00.961022    4369 main.go:141] libmachine: Using SSH client type: native
	I0806 00:53:00.961157    4369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b16a10] 0x100b19270 <nil>  [] 0s} localhost 50230 <nil> <nil>}
	I0806 00:53:00.961163    4369 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:53:01.028562    4369 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-217000
	
	I0806 00:53:01.028578    4369 buildroot.go:166] provisioning hostname "running-upgrade-217000"
	I0806 00:53:01.028622    4369 main.go:141] libmachine: Using SSH client type: native
	I0806 00:53:01.028740    4369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b16a10] 0x100b19270 <nil>  [] 0s} localhost 50230 <nil> <nil>}
	I0806 00:53:01.028746    4369 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-217000 && echo "running-upgrade-217000" | sudo tee /etc/hostname
	I0806 00:53:01.096901    4369 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-217000
	
	I0806 00:53:01.096943    4369 main.go:141] libmachine: Using SSH client type: native
	I0806 00:53:01.097052    4369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b16a10] 0x100b19270 <nil>  [] 0s} localhost 50230 <nil> <nil>}
	I0806 00:53:01.097062    4369 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-217000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-217000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-217000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:53:01.161339    4369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:53:01.161352    4369 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-965/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-965/.minikube}
	I0806 00:53:01.161361    4369 buildroot.go:174] setting up certificates
	I0806 00:53:01.161366    4369 provision.go:84] configureAuth start
	I0806 00:53:01.161373    4369 provision.go:143] copyHostCerts
	I0806 00:53:01.161436    4369 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-965/.minikube/ca.pem, removing ...
	I0806 00:53:01.161441    4369 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-965/.minikube/ca.pem
	I0806 00:53:01.161566    4369 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-965/.minikube/ca.pem (1082 bytes)
	I0806 00:53:01.161762    4369 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-965/.minikube/cert.pem, removing ...
	I0806 00:53:01.161765    4369 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-965/.minikube/cert.pem
	I0806 00:53:01.161807    4369 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-965/.minikube/cert.pem (1123 bytes)
	I0806 00:53:01.161902    4369 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-965/.minikube/key.pem, removing ...
	I0806 00:53:01.161905    4369 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-965/.minikube/key.pem
	I0806 00:53:01.161955    4369 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-965/.minikube/key.pem (1675 bytes)
	I0806 00:53:01.162042    4369 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-965/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-217000 san=[127.0.0.1 localhost minikube running-upgrade-217000]
	I0806 00:53:01.338838    4369 provision.go:177] copyRemoteCerts
	I0806 00:53:01.338897    4369 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:53:01.338908    4369 sshutil.go:53] new ssh client: &{IP:localhost Port:50230 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/running-upgrade-217000/id_rsa Username:docker}
	I0806 00:53:01.374807    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 00:53:01.381581    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0806 00:53:01.388331    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:53:01.395311    4369 provision.go:87] duration metric: took 233.942709ms to configureAuth
	I0806 00:53:01.395320    4369 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:53:01.395425    4369 config.go:182] Loaded profile config "running-upgrade-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 00:53:01.395458    4369 main.go:141] libmachine: Using SSH client type: native
	I0806 00:53:01.395542    4369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b16a10] 0x100b19270 <nil>  [] 0s} localhost 50230 <nil> <nil>}
	I0806 00:53:01.395546    4369 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:53:01.461219    4369 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:53:01.461230    4369 buildroot.go:70] root file system type: tmpfs
	I0806 00:53:01.461285    4369 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:53:01.461350    4369 main.go:141] libmachine: Using SSH client type: native
	I0806 00:53:01.461474    4369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b16a10] 0x100b19270 <nil>  [] 0s} localhost 50230 <nil> <nil>}
	I0806 00:53:01.461506    4369 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:53:01.531190    4369 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:53:01.531244    4369 main.go:141] libmachine: Using SSH client type: native
	I0806 00:53:01.531370    4369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b16a10] 0x100b19270 <nil>  [] 0s} localhost 50230 <nil> <nil>}
	I0806 00:53:01.531381    4369 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:53:01.599228    4369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:53:01.599238    4369 machine.go:97] duration metric: took 638.28ms to provisionDockerMachine
	I0806 00:53:01.599242    4369 start.go:293] postStartSetup for "running-upgrade-217000" (driver="qemu2")
	I0806 00:53:01.599248    4369 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:53:01.599295    4369 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:53:01.599303    4369 sshutil.go:53] new ssh client: &{IP:localhost Port:50230 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/running-upgrade-217000/id_rsa Username:docker}
	I0806 00:53:01.633544    4369 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:53:01.634887    4369 info.go:137] Remote host: Buildroot 2021.02.12
	I0806 00:53:01.634894    4369 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-965/.minikube/addons for local assets ...
	I0806 00:53:01.634972    4369 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-965/.minikube/files for local assets ...
	I0806 00:53:01.635074    4369 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem -> 14552.pem in /etc/ssl/certs
	I0806 00:53:01.635171    4369 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:53:01.637721    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem --> /etc/ssl/certs/14552.pem (1708 bytes)
	I0806 00:53:01.644664    4369 start.go:296] duration metric: took 45.417666ms for postStartSetup
	I0806 00:53:01.644676    4369 fix.go:56] duration metric: took 695.856375ms for fixHost
	I0806 00:53:01.644707    4369 main.go:141] libmachine: Using SSH client type: native
	I0806 00:53:01.644808    4369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100b16a10] 0x100b19270 <nil>  [] 0s} localhost 50230 <nil> <nil>}
	I0806 00:53:01.644813    4369 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:53:01.709635    4369 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722930781.842705763
	
	I0806 00:53:01.709643    4369 fix.go:216] guest clock: 1722930781.842705763
	I0806 00:53:01.709647    4369 fix.go:229] Guest: 2024-08-06 00:53:01.842705763 -0700 PDT Remote: 2024-08-06 00:53:01.644679 -0700 PDT m=+0.797946460 (delta=198.026763ms)
	I0806 00:53:01.709657    4369 fix.go:200] guest clock delta is within tolerance: 198.026763ms
	I0806 00:53:01.709660    4369 start.go:83] releasing machines lock for "running-upgrade-217000", held for 760.849125ms
	I0806 00:53:01.709722    4369 ssh_runner.go:195] Run: cat /version.json
	I0806 00:53:01.709732    4369 sshutil.go:53] new ssh client: &{IP:localhost Port:50230 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/running-upgrade-217000/id_rsa Username:docker}
	I0806 00:53:01.709722    4369 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:53:01.709762    4369 sshutil.go:53] new ssh client: &{IP:localhost Port:50230 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/running-upgrade-217000/id_rsa Username:docker}
	W0806 00:53:01.710318    4369 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50336->127.0.0.1:50230: write: broken pipe
	I0806 00:53:01.710336    4369 retry.go:31] will retry after 153.683981ms: ssh: handshake failed: write tcp 127.0.0.1:50336->127.0.0.1:50230: write: broken pipe
	W0806 00:53:01.743346    4369 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0806 00:53:01.743405    4369 ssh_runner.go:195] Run: systemctl --version
	I0806 00:53:01.745206    4369 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 00:53:01.746829    4369 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:53:01.746870    4369 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0806 00:53:01.749619    4369 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0806 00:53:01.754044    4369 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:53:01.754055    4369 start.go:495] detecting cgroup driver to use...
	I0806 00:53:01.754124    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:53:01.759035    4369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0806 00:53:01.761866    4369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:53:01.764974    4369 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:53:01.764994    4369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:53:01.768386    4369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:53:01.771959    4369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:53:01.774987    4369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:53:01.777691    4369 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:53:01.780640    4369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:53:01.783948    4369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:53:01.787282    4369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:53:01.790096    4369 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:53:01.792813    4369 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:53:01.795968    4369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:53:01.890454    4369 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:53:01.900986    4369 start.go:495] detecting cgroup driver to use...
	I0806 00:53:01.901071    4369 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:53:01.913311    4369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:53:01.948884    4369 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:53:01.977105    4369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:53:01.982059    4369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:53:01.986754    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:53:01.992378    4369 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:53:01.993649    4369 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:53:01.996271    4369 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:53:02.001116    4369 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:53:02.105449    4369 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:53:02.202261    4369 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:53:02.202327    4369 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:53:02.207242    4369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:53:02.287414    4369 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:53:04.859785    4369 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.57237075s)
	I0806 00:53:04.859851    4369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:53:04.864744    4369 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:53:04.871492    4369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:53:04.876147    4369 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:53:04.947109    4369 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:53:05.023229    4369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:53:05.099960    4369 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:53:05.105879    4369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:53:05.110326    4369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:53:05.191853    4369 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:53:05.231924    4369 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:53:05.232016    4369 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:53:05.234991    4369 start.go:563] Will wait 60s for crictl version
	I0806 00:53:05.235037    4369 ssh_runner.go:195] Run: which crictl
	I0806 00:53:05.236426    4369 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:53:05.248601    4369 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0806 00:53:05.248665    4369 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:53:05.262846    4369 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:53:05.283022    4369 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0806 00:53:05.283165    4369 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0806 00:53:05.284550    4369 kubeadm.go:883] updating cluster {Name:running-upgrade-217000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50262 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-217000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0806 00:53:05.284590    4369 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0806 00:53:05.284630    4369 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:53:05.295169    4369 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:53:05.295178    4369 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0806 00:53:05.295218    4369 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:53:05.298125    4369 ssh_runner.go:195] Run: which lz4
	I0806 00:53:05.299400    4369 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 00:53:05.300674    4369 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:53:05.300688    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0806 00:53:06.209030    4369 docker.go:649] duration metric: took 909.664875ms to copy over tarball
	I0806 00:53:06.209082    4369 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 00:53:07.348248    4369 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.139160375s)
	I0806 00:53:07.348268    4369 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 00:53:07.365082    4369 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:53:07.369285    4369 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0806 00:53:07.374565    4369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:53:07.461885    4369 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:53:08.664189    4369 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.202292875s)
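The stat-then-scp exchange above (the failed existence check for /preloaded.tar.lz4 followed by the 359 MB copy and extraction) is the transfer-on-miss pattern this log repeats for every cached artifact: stat the remote path, and copy only when the check exits non-zero. A minimal local sketch of the same pattern in Go; the paths and helper name are illustrative, not minikube's:

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureFile copies src to dst only when dst is missing, mirroring the
// "existence check ... Process exited with status 1" then scp exchange
// in the log above. Names and paths are illustrative.
func ensureFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	fmt.Println(ensureFile("/tmp/preloaded.tar.lz4", "/tmp/copy.tar.lz4"))
}

The later "existence check for /var/lib/minikube/images/..." lines are the same decision applied to each individual cached image.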
	I0806 00:53:08.664294    4369 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:53:08.686978    4369 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:53:08.686987    4369 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0806 00:53:08.686993    4369 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 00:53:08.690821    4369 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:53:08.692245    4369 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:53:08.694424    4369 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:53:08.694526    4369 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:53:08.696663    4369 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0806 00:53:08.696663    4369 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:53:08.697805    4369 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:53:08.698120    4369 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0806 00:53:08.699303    4369 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:53:08.699432    4369 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0806 00:53:08.700449    4369 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:53:08.700450    4369 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0806 00:53:08.701657    4369 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0806 00:53:08.701749    4369 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:53:08.702821    4369 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:53:08.703412    4369 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0806 00:53:09.112318    4369 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:53:09.127828    4369 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0806 00:53:09.127858    4369 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:53:09.127914    4369 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:53:09.128582    4369 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0806 00:53:09.142764    4369 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0806 00:53:09.142784    4369 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0806 00:53:09.142838    4369 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0806 00:53:09.142853    4369 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0806 00:53:09.150001    4369 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0806 00:53:09.150137    4369 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:53:09.152180    4369 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0806 00:53:09.154478    4369 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0806 00:53:09.154914    4369 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:53:09.173508    4369 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0806 00:53:09.173531    4369 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:53:09.173585    4369 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:53:09.174511    4369 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:53:09.175809    4369 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0806 00:53:09.175821    4369 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0806 00:53:09.175848    4369 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0806 00:53:09.176389    4369 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0806 00:53:09.176400    4369 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:53:09.176426    4369 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:53:09.182258    4369 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0806 00:53:09.193096    4369 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0806 00:53:09.193231    4369 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0806 00:53:09.195287    4369 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0806 00:53:09.195310    4369 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:53:09.195360    4369 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:53:09.208324    4369 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0806 00:53:09.208450    4369 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0806 00:53:09.216191    4369 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0806 00:53:09.218942    4369 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0806 00:53:09.218959    4369 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0806 00:53:09.218977    4369 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0806 00:53:09.218993    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0806 00:53:09.218999    4369 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0806 00:53:09.219028    4369 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0806 00:53:09.219039    4369 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0806 00:53:09.219054    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0806 00:53:09.244262    4369 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0806 00:53:09.244384    4369 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0806 00:53:09.264397    4369 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0806 00:53:09.264430    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0806 00:53:09.294797    4369 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0806 00:53:09.294894    4369 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:53:09.306576    4369 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0806 00:53:09.306588    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0806 00:53:09.339569    4369 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0806 00:53:09.339592    4369 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:53:09.339654    4369 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:53:09.375683    4369 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0806 00:53:09.375705    4369 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0806 00:53:09.375711    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0806 00:53:10.378811    4369 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.039136542s)
	I0806 00:53:10.378840    4369 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0806 00:53:10.378861    4369 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load": (1.003140167s)
	I0806 00:53:10.378880    4369 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0806 00:53:10.378915    4369 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0806 00:53:10.378932    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0806 00:53:10.379139    4369 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0806 00:53:10.385861    4369 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0806 00:53:10.385917    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0806 00:53:10.574490    4369 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0806 00:53:10.574516    4369 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0806 00:53:10.574523    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0806 00:53:10.863858    4369 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0806 00:53:10.863902    4369 cache_images.go:92] duration metric: took 2.176917208s to LoadCachedImages
	W0806 00:53:10.863956    4369 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0806 00:53:10.863962    4369 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0806 00:53:10.864016    4369 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-217000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-217000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:53:10.864084    4369 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:53:10.894082    4369 cni.go:84] Creating CNI manager for ""
	I0806 00:53:10.894109    4369 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:53:10.894114    4369 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:53:10.894124    4369 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-217000 NodeName:running-upgrade-217000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:53:10.894193    4369 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-217000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
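The multi-document kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from a template filled with the node IP, port, and profile name that appear in the "kubeadm options" line. A heavily trimmed sketch of that kind of rendering with Go's text/template; the template text and struct fields here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the kind of template that expands into the
// config above; field names are illustrative.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the log: node IP 10.0.2.15, port 8443, profile name.
	_ = t.Execute(os.Stdout, struct {
		NodeIP   string
		Port     int
		NodeName string
	}{"10.0.2.15", 8443, "running-upgrade-217000"})
}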
	I0806 00:53:10.894253    4369 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0806 00:53:10.897632    4369 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:53:10.897670    4369 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:53:10.901194    4369 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0806 00:53:10.914423    4369 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:53:10.920926    4369 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0806 00:53:10.928625    4369 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0806 00:53:10.930169    4369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:53:11.073535    4369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:53:11.083262    4369 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000 for IP: 10.0.2.15
	I0806 00:53:11.083271    4369 certs.go:194] generating shared ca certs ...
	I0806 00:53:11.083279    4369 certs.go:226] acquiring lock for ca certs: {Name:mkb2ca998ea1a45f9f580d4d76a58064c889c60a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:53:11.083487    4369 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-965/.minikube/ca.key
	I0806 00:53:11.083524    4369 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-965/.minikube/proxy-client-ca.key
	I0806 00:53:11.083529    4369 certs.go:256] generating profile certs ...
	I0806 00:53:11.083585    4369 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/client.key
	I0806 00:53:11.083600    4369 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/apiserver.key.eb15edaf
	I0806 00:53:11.083610    4369 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/apiserver.crt.eb15edaf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0806 00:53:11.232482    4369 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/apiserver.crt.eb15edaf ...
	I0806 00:53:11.232498    4369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/apiserver.crt.eb15edaf: {Name:mk94659caadcb9b17e1bab0cd70b819dee43568a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:53:11.232791    4369 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/apiserver.key.eb15edaf ...
	I0806 00:53:11.232796    4369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/apiserver.key.eb15edaf: {Name:mkdc0cbf849789582206b65c315aa4eeabc53ef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:53:11.232932    4369 certs.go:381] copying /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/apiserver.crt.eb15edaf -> /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/apiserver.crt
	I0806 00:53:11.233059    4369 certs.go:385] copying /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/apiserver.key.eb15edaf -> /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/apiserver.key
	I0806 00:53:11.233193    4369 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/proxy-client.key
	I0806 00:53:11.233325    4369 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/1455.pem (1338 bytes)
	W0806 00:53:11.233347    4369 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-965/.minikube/certs/1455_empty.pem, impossibly tiny 0 bytes
	I0806 00:53:11.233353    4369 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca-key.pem (1679 bytes)
	I0806 00:53:11.233377    4369 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem (1082 bytes)
	I0806 00:53:11.233395    4369 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:53:11.233413    4369 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/key.pem (1675 bytes)
	I0806 00:53:11.233450    4369 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem (1708 bytes)
	I0806 00:53:11.233806    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:53:11.240478    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:53:11.256136    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:53:11.264211    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:53:11.278275    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0806 00:53:11.285571    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 00:53:11.306248    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:53:11.317008    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0806 00:53:11.323551    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1708 bytes)
	I0806 00:53:11.329794    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:53:11.351915    4369 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/certs/1455.pem --> /usr/share/ca-certificates/1455.pem (1338 bytes)
	I0806 00:53:11.365338    4369 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:53:11.372705    4369 ssh_runner.go:195] Run: openssl version
	I0806 00:53:11.378060    4369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:53:11.388787    4369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:53:11.390317    4369 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:53:11.390338    4369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:53:11.399231    4369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:53:11.404222    4369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1455.pem && ln -fs /usr/share/ca-certificates/1455.pem /etc/ssl/certs/1455.pem"
	I0806 00:53:11.407216    4369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1455.pem
	I0806 00:53:11.408602    4369 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:12 /usr/share/ca-certificates/1455.pem
	I0806 00:53:11.408621    4369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1455.pem
	I0806 00:53:11.410417    4369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1455.pem /etc/ssl/certs/51391683.0"
	I0806 00:53:11.414481    4369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14552.pem && ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem"
	I0806 00:53:11.418941    4369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I0806 00:53:11.422903    4369 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:12 /usr/share/ca-certificates/14552.pem
	I0806 00:53:11.422921    4369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I0806 00:53:11.424826    4369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:53:11.430137    4369 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:53:11.431654    4369 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 00:53:11.439441    4369 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 00:53:11.441320    4369 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 00:53:11.443080    4369 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 00:53:11.445100    4369 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 00:53:11.448047    4369 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
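Each `openssl x509 -noout -in <cert> -checkend 86400` run above exits zero only when the certificate remains valid for at least the next 86400 seconds (24 hours); a non-zero exit is presumably what triggers regeneration. An equivalent check as a Go sketch; the cert path in main is one of the files from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend` (86400 seconds = 24h in the log above).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < d, nil
}

func main() {
	renew, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	fmt.Println(renew, err)
}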
	I0806 00:53:11.450653    4369 kubeadm.go:392] StartCluster: {Name:running-upgrade-217000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50262 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-217000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0806 00:53:11.450721    4369 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:53:11.498417    4369 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:53:11.504707    4369 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 00:53:11.504712    4369 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 00:53:11.504733    4369 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 00:53:11.511569    4369 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:53:11.511792    4369 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-217000" does not appear in /Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:53:11.511849    4369 kubeconfig.go:62] /Users/jenkins/minikube-integration/19370-965/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-217000" cluster setting kubeconfig missing "running-upgrade-217000" context setting]
	I0806 00:53:11.511969    4369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/kubeconfig: {Name:mk054609795edfdc491af119142ed9d8e6063b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:53:11.513296    4369 kapi.go:59] client config for running-upgrade-217000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101eabf90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:53:11.513627    4369 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 00:53:11.518029    4369 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-217000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
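The drift decision above hinges on the exit status of `diff -u`: 0 means the rendered config matches what is on disk, 1 means the files differ (here the criSocket and cgroupDriver hunks), and anything higher is an error from diff itself. A small Go sketch of that decision; the helper name is hypothetical:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u oldPath newPath` and treats exit status 1 as
// "the kubeadm config changed and the cluster should be reconfigured".
func configDrifted(oldPath, newPath string) (bool, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, nil // identical files
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		fmt.Printf("drift:\n%s", out)
		return true, nil
	}
	return false, err // diff itself failed
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
}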
	I0806 00:53:11.518036    4369 kubeadm.go:1160] stopping kube-system containers ...
	I0806 00:53:11.518083    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:53:11.562380    4369 docker.go:483] Stopping containers: [a30aa9e17223 9b1a1d475261 7b6697518910 059269d0bcff 91d37604f99c 3b8147739ae8 fc3b44029d63 1c55bb464063 29d7ad21afd2 79047a3dcae7 5f751153bd2e de9b53846284 26a0b0cd7ff2 80d3f1373eae c948e3c52954 fbacaf13dc1c 0ab1007fb54f eda1d553eed5 e0a83f137cff ee82a5823d28]
	I0806 00:53:11.562454    4369 ssh_runner.go:195] Run: docker stop a30aa9e17223 9b1a1d475261 7b6697518910 059269d0bcff 91d37604f99c 3b8147739ae8 fc3b44029d63 1c55bb464063 29d7ad21afd2 79047a3dcae7 5f751153bd2e de9b53846284 26a0b0cd7ff2 80d3f1373eae c948e3c52954 fbacaf13dc1c 0ab1007fb54f eda1d553eed5 e0a83f137cff ee82a5823d28
	I0806 00:53:11.800511    4369 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 00:53:11.889013    4369 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:53:11.893154    4369 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug  6 07:52 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug  6 07:52 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug  6 07:52 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug  6 07:52 /etc/kubernetes/scheduler.conf
	
	I0806 00:53:11.893206    4369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/admin.conf
	I0806 00:53:11.896595    4369 kubeadm.go:163] "https://control-plane.minikube.internal:50262" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:53:11.896629    4369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:53:11.899504    4369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/kubelet.conf
	I0806 00:53:11.906100    4369 kubeadm.go:163] "https://control-plane.minikube.internal:50262" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:53:11.906125    4369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:53:11.917787    4369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/controller-manager.conf
	I0806 00:53:11.922157    4369 kubeadm.go:163] "https://control-plane.minikube.internal:50262" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:53:11.922195    4369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:53:11.924968    4369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/scheduler.conf
	I0806 00:53:11.930805    4369 kubeadm.go:163] "https://control-plane.minikube.internal:50262" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:53:11.930849    4369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:53:11.934811    4369 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:53:11.937509    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:53:11.963680    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:53:12.464265    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:53:12.671267    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:53:12.703220    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:53:12.729539    4369 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:53:12.729613    4369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:53:13.231978    4369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:53:13.731655    4369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:53:13.739366    4369 api_server.go:72] duration metric: took 1.009835875s to wait for apiserver process to appear ...
	I0806 00:53:13.739376    4369 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:53:13.739386    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:53:18.741450    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:53:18.741497    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:53:23.741901    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:53:23.741982    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:53:28.742675    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:53:28.742754    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:53:33.743576    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:53:33.743661    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:53:38.745011    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:53:38.745089    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:53:43.746678    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:53:43.746763    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:53:48.748880    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:53:48.748974    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:53:53.751555    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:53:53.751659    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:53:58.754327    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:53:58.754405    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:54:03.756860    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:54:03.756946    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:54:08.759597    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:54:08.759677    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:54:13.762288    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
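At this point the run settles into the failure loop that dominates the rest of the test: each healthz probe above gives up after roughly five seconds, and after repeated misses minikube pauses to collect component logs before probing again. A minimal sketch of such a probe loop in Go; the client settings are assumptions rather than minikube's exact configuration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz repeatedly probes the apiserver /healthz endpoint until it
// answers 200 or the overall deadline expires. Hypothetical helper, not
// minikube's implementation.
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between checks above
		Transport: &http.Transport{
			// The test apiserver uses a self-signed CA; skip verification
			// in this sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(pollHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
}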
	I0806 00:54:13.762513    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:54:13.776652    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:54:13.776733    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:54:13.787287    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:54:13.787356    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:54:13.797922    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:54:13.797992    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:54:13.809372    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:54:13.809441    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:54:13.819576    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:54:13.819654    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:54:13.829876    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:54:13.829948    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:54:13.839481    4369 logs.go:276] 0 containers: []
	W0806 00:54:13.839493    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:54:13.839545    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:54:13.850133    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:54:13.850158    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:54:13.850163    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:54:13.862236    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:54:13.862250    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:54:13.873802    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:54:13.873814    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:54:13.910026    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:54:13.910033    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:54:13.914481    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:54:13.914489    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:54:13.925966    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:54:13.925978    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:54:13.937964    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:54:13.937980    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:54:13.949233    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:54:13.949247    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:54:13.965057    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:54:13.965069    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:54:13.986915    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:54:13.986927    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:54:14.003849    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:54:14.003860    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:54:14.029045    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:54:14.029053    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:54:14.040791    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:54:14.040804    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:54:14.114129    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:54:14.114144    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:54:14.126571    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:54:14.126581    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:54:14.140743    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:54:14.140754    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:54:14.151623    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:54:14.151653    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:54:16.665386    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:54:21.667818    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:54:21.668213    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:54:21.721298    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:54:21.721426    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:54:21.742232    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:54:21.742317    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:54:21.755422    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:54:21.755491    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:54:21.769700    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:54:21.769781    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:54:21.780174    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:54:21.780239    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:54:21.790766    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:54:21.790837    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:54:21.800890    4369 logs.go:276] 0 containers: []
	W0806 00:54:21.800901    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:54:21.800968    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:54:21.811541    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:54:21.811559    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:54:21.811564    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:54:21.822823    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:54:21.822842    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:54:21.859138    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:54:21.859149    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:54:21.873075    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:54:21.873085    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:54:21.884505    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:54:21.884519    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:54:21.895997    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:54:21.896010    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:54:21.913045    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:54:21.913057    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:54:21.924582    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:54:21.924595    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:54:21.935990    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:54:21.936002    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:54:21.940208    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:54:21.940216    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:54:21.954105    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:54:21.954118    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:54:21.965076    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:54:21.965086    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:54:21.976991    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:54:21.977003    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:54:22.001927    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:54:22.001940    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:54:22.013177    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:54:22.013191    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:54:22.040367    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:54:22.040376    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:54:22.078141    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:54:22.078151    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:54:24.594632    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:54:29.597445    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
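	The repeated five-second gap between each "Checking apiserver healthz" line and its "stopped:" line is the HTTP client's own timeout firing: Go's net/http emits exactly this "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" error when a request's Client.Timeout elapses before response headers arrive. Below is a minimal sketch of that polling pattern, assuming a plain 5s client against the self-signed apiserver endpoint; it is illustrative only, not minikube's actual api_server.go code.

	// Sketch: reproduce the timeout error text seen in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// Matches the ~5s gap between each check and its "stopped:" line.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a self-signed cert inside the VM.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// e.g. Get "https://10.0.2.15:8443/healthz": context deadline exceeded
			// (Client.Timeout exceeded while awaiting headers)
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}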
	I0806 00:54:29.597905    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:54:29.637567    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:54:29.637701    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:54:29.663017    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:54:29.663133    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:54:29.677927    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:54:29.678002    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:54:29.690287    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:54:29.690350    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:54:29.700767    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:54:29.700831    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:54:29.711314    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:54:29.711383    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:54:29.721384    4369 logs.go:276] 0 containers: []
	W0806 00:54:29.721396    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:54:29.721453    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:54:29.731641    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:54:29.731667    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:54:29.731673    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:54:29.735917    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:54:29.735923    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:54:29.750080    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:54:29.750091    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:54:29.761375    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:54:29.761388    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:54:29.774925    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:54:29.774936    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:54:29.800840    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:54:29.800851    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:54:29.837605    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:54:29.837611    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:54:29.872859    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:54:29.872873    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:54:29.884924    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:54:29.884937    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:54:29.901810    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:54:29.901821    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:54:29.919589    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:54:29.919602    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:54:29.931177    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:54:29.931189    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:54:29.942887    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:54:29.942897    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:54:29.957155    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:54:29.957164    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:54:29.968824    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:54:29.968837    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:54:29.980757    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:54:29.980768    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:54:29.991650    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:54:29.991659    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
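	Each gathering pass above follows the same shape: list containers per control-plane component with a docker name filter, then tail the last 400 lines from each match. A self-contained sketch of that loop, run locally for simplicity (minikube actually executes these commands over SSH via ssh_runner; containerIDs is a hypothetical helper name):

	// Sketch of one log-gathering pass, assuming local docker access.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all container IDs (running or exited) whose name
	// matches the k8s_<component> prefix, mirroring the filters in the log.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil || len(ids) == 0 {
				// Mirrors the warning for "kindnet" above, which has no containers.
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				// Same command the log shows: docker logs --tail 400 <id>.
				logs, _ := exec.Command("/bin/bash", "-c",
					"docker logs --tail 400 "+id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
			}
		}
	}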
	I0806 00:54:32.504773    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:54:37.507509    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:54:37.507899    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:54:37.551922    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:54:37.552048    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:54:37.571649    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:54:37.571743    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:54:37.585596    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:54:37.585665    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:54:37.597657    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:54:37.597731    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:54:37.608371    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:54:37.608433    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:54:37.618493    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:54:37.618551    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:54:37.629152    4369 logs.go:276] 0 containers: []
	W0806 00:54:37.629165    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:54:37.629222    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:54:37.639365    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:54:37.639381    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:54:37.639386    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:54:37.667096    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:54:37.667105    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:54:37.704134    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:54:37.704140    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:54:37.717934    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:54:37.717945    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:54:37.729782    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:54:37.729795    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:54:37.746720    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:54:37.746730    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:54:37.757603    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:54:37.757613    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:54:37.771335    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:54:37.771346    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:54:37.788961    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:54:37.788974    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:54:37.793229    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:54:37.793237    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:54:37.805335    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:54:37.805345    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:54:37.818309    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:54:37.818331    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:54:37.837572    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:54:37.837583    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:54:37.849515    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:54:37.849527    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:54:37.887578    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:54:37.887592    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:54:37.901446    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:54:37.901457    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:54:37.923568    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:54:37.923577    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:54:40.437136    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:54:45.438605    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:54:45.439023    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:54:45.487640    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:54:45.487766    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:54:45.511157    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:54:45.511252    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:54:45.531692    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:54:45.531754    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:54:45.542914    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:54:45.542974    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:54:45.553316    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:54:45.553371    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:54:45.563703    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:54:45.563772    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:54:45.580195    4369 logs.go:276] 0 containers: []
	W0806 00:54:45.580211    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:54:45.580274    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:54:45.591080    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:54:45.591101    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:54:45.591107    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:54:45.603042    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:54:45.603053    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:54:45.607761    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:54:45.607771    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:54:45.625976    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:54:45.625987    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:54:45.643063    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:54:45.643074    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:54:45.668087    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:54:45.668097    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:54:45.679166    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:54:45.679179    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:54:45.717136    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:54:45.717145    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:54:45.730987    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:54:45.731000    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:54:45.742851    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:54:45.742861    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:54:45.753619    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:54:45.753632    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:54:45.790943    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:54:45.790956    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:54:45.802181    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:54:45.802190    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:54:45.813922    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:54:45.813933    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:54:45.827154    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:54:45.827167    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:54:45.838528    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:54:45.838536    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:54:45.850141    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:54:45.850157    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:54:48.365914    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:54:53.368650    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:54:53.368974    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:54:53.409361    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:54:53.409500    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:54:53.430553    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:54:53.430674    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:54:53.446240    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:54:53.446320    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:54:53.458148    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:54:53.458225    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:54:53.474150    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:54:53.474218    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:54:53.485387    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:54:53.485457    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:54:53.496082    4369 logs.go:276] 0 containers: []
	W0806 00:54:53.496096    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:54:53.496150    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:54:53.507323    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:54:53.507343    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:54:53.507348    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:54:53.541700    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:54:53.541713    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:54:53.560586    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:54:53.560597    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:54:53.572971    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:54:53.572985    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:54:53.584446    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:54:53.584459    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:54:53.602222    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:54:53.602232    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:54:53.613650    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:54:53.613659    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:54:53.630404    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:54:53.630416    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:54:53.643268    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:54:53.643279    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:54:53.647774    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:54:53.647781    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:54:53.661592    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:54:53.661601    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:54:53.673659    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:54:53.673671    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:54:53.687884    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:54:53.687896    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:54:53.699617    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:54:53.699630    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:54:53.735597    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:54:53.735607    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:54:53.747008    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:54:53.747021    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:54:53.758669    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:54:53.758679    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:54:56.287091    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:55:01.288109    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:55:01.288547    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:55:01.326107    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:55:01.326259    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:55:01.346934    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:55:01.347025    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:55:01.361789    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:55:01.361863    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:55:01.373467    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:55:01.373532    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:55:01.383961    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:55:01.384034    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:55:01.395137    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:55:01.395204    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:55:01.405839    4369 logs.go:276] 0 containers: []
	W0806 00:55:01.405850    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:55:01.405915    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:55:01.416999    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:55:01.417016    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:55:01.417022    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:55:01.430723    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:55:01.430736    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:55:01.443481    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:55:01.443492    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:55:01.455849    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:55:01.455859    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:55:01.472907    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:55:01.472918    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:55:01.499135    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:55:01.499152    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:55:01.503811    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:55:01.503819    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:55:01.522534    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:55:01.522546    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:55:01.533763    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:55:01.533774    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:55:01.550427    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:55:01.550438    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:55:01.588500    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:55:01.588508    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:55:01.602213    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:55:01.602227    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:55:01.615983    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:55:01.615993    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:55:01.627552    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:55:01.627563    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:55:01.664913    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:55:01.664924    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:55:01.676252    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:55:01.676264    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:55:01.687558    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:55:01.687571    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:55:04.201203    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:55:09.203409    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:55:09.203798    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:55:09.237646    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:55:09.237768    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:55:09.266983    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:55:09.267061    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:55:09.279994    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:55:09.280066    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:55:09.297924    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:55:09.297989    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:55:09.308655    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:55:09.308719    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:55:09.320588    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:55:09.320655    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:55:09.333910    4369 logs.go:276] 0 containers: []
	W0806 00:55:09.333922    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:55:09.333980    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:55:09.344583    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:55:09.344600    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:55:09.344605    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:55:09.360198    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:55:09.360211    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:55:09.375441    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:55:09.375454    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:55:09.413671    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:55:09.413679    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:55:09.425360    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:55:09.425371    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:55:09.437512    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:55:09.437524    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:55:09.472067    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:55:09.472080    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:55:09.490482    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:55:09.490495    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:55:09.511169    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:55:09.511182    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:55:09.522962    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:55:09.522972    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:55:09.547199    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:55:09.547208    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:55:09.564816    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:55:09.564828    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:55:09.578003    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:55:09.578013    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:55:09.592911    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:55:09.592921    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:55:09.604954    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:55:09.604966    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:55:09.616554    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:55:09.616568    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:55:09.620753    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:55:09.620761    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:55:12.134128    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:55:17.136477    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:55:17.136849    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:55:17.184443    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:55:17.184567    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:55:17.205033    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:55:17.205110    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:55:17.221350    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:55:17.221417    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:55:17.232934    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:55:17.233003    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:55:17.243665    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:55:17.243728    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:55:17.254204    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:55:17.254267    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:55:17.264527    4369 logs.go:276] 0 containers: []
	W0806 00:55:17.264537    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:55:17.264589    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:55:17.275148    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:55:17.275164    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:55:17.275170    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:55:17.313485    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:55:17.313493    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:55:17.371879    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:55:17.371894    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:55:17.383812    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:55:17.383824    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:55:17.397954    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:55:17.397968    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:55:17.409750    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:55:17.409765    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:55:17.427132    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:55:17.427143    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:55:17.441054    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:55:17.441066    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:55:17.452957    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:55:17.452967    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:55:17.470792    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:55:17.470803    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:55:17.482969    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:55:17.482983    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:55:17.496797    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:55:17.496807    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:55:17.521172    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:55:17.521182    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:55:17.525343    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:55:17.525353    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:55:17.539102    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:55:17.539111    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:55:17.551072    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:55:17.551084    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:55:17.563290    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:55:17.563304    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:55:20.077051    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:55:25.079619    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:55:25.079691    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:55:25.094467    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:55:25.094551    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:55:25.116929    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:55:25.117003    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:55:25.130551    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:55:25.130622    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:55:25.142671    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:55:25.142749    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:55:25.154977    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:55:25.155049    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:55:25.167194    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:55:25.167268    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:55:25.182164    4369 logs.go:276] 0 containers: []
	W0806 00:55:25.182178    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:55:25.182240    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:55:25.195064    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:55:25.195088    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:55:25.195094    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:55:25.211390    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:55:25.211403    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:55:25.223269    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:55:25.223283    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:55:25.264808    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:55:25.264822    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:55:25.276765    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:55:25.276776    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:55:25.288477    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:55:25.288488    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:55:25.310572    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:55:25.310582    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:55:25.322995    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:55:25.323007    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:55:25.335210    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:55:25.335222    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:55:25.347429    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:55:25.347442    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:55:25.352270    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:55:25.352283    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:55:25.370797    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:55:25.370810    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:55:25.384196    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:55:25.384216    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:55:25.414105    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:55:25.414123    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:55:25.426771    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:55:25.426785    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:55:25.466059    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:55:25.466070    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:55:25.482377    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:55:25.482398    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:55:27.998970    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:55:33.001670    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:55:33.001820    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:55:33.017297    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:55:33.017375    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:55:33.028176    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:55:33.028252    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:55:33.040296    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:55:33.040373    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:55:33.051680    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:55:33.051744    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:55:33.062678    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:55:33.062741    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:55:33.073662    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:55:33.073734    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:55:33.084387    4369 logs.go:276] 0 containers: []
	W0806 00:55:33.084398    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:55:33.084453    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:55:33.095092    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:55:33.095112    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:55:33.095118    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:55:33.109720    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:55:33.109730    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:55:33.121264    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:55:33.121275    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:55:33.158357    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:55:33.158364    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:55:33.198417    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:55:33.198430    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:55:33.223997    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:55:33.224011    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:55:33.238119    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:55:33.238129    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:55:33.249807    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:55:33.249819    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:55:33.267525    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:55:33.267534    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:55:33.279369    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:55:33.279381    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:55:33.291182    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:55:33.291192    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:55:33.303252    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:55:33.303263    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:55:33.324509    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:55:33.324518    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:55:33.328898    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:55:33.328907    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:55:33.345283    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:55:33.345293    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:55:33.356669    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:55:33.356681    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:55:33.368607    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:55:33.368618    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:55:35.887796    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:55:40.889966    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:55:40.890158    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:55:40.903975    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:55:40.904053    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:55:40.915665    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:55:40.915736    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:55:40.926290    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:55:40.926346    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:55:40.944421    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:55:40.944594    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:55:40.955391    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:55:40.955463    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:55:40.966790    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:55:40.966855    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:55:40.976695    4369 logs.go:276] 0 containers: []
	W0806 00:55:40.976706    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:55:40.976768    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:55:40.986754    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:55:40.986775    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:55:40.986780    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:55:41.007337    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:55:41.007347    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:55:41.018239    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:55:41.018250    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:55:41.029328    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:55:41.029343    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:55:41.041124    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:55:41.041136    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:55:41.052991    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:55:41.053002    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:55:41.070621    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:55:41.070632    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:55:41.082371    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:55:41.082380    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:55:41.087269    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:55:41.087275    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:55:41.100579    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:55:41.100587    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:55:41.112344    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:55:41.112352    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:55:41.126093    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:55:41.126101    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:55:41.138020    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:55:41.138029    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:55:41.149081    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:55:41.149093    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:55:41.174001    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:55:41.174008    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:55:41.211449    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:55:41.211456    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:55:41.245548    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:55:41.245562    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:55:43.758714    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:55:48.760965    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:55:48.761174    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:55:48.783606    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:55:48.783704    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:55:48.798377    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:55:48.798441    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:55:48.810527    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:55:48.810625    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:55:48.832453    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:55:48.832521    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:55:48.843879    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:55:48.843942    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:55:48.854525    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:55:48.854594    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:55:48.865426    4369 logs.go:276] 0 containers: []
	W0806 00:55:48.865438    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:55:48.865494    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:55:48.875867    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
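	Each retry then re-enumerates the control-plane containers the same way: one docker ps -a call per component, filtered on the k8s_ name prefix that cri-dockerd gives kubeadm pods and formatted down to bare IDs. A hypothetical standalone version of that step, assuming only that a Docker CLI is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists every container (running or exited) whose Docker
	// name carries the k8s_<component> prefix.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			// Same shape as the logs.go:276 lines above, e.g. "2 containers: [...]".
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}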
	I0806 00:55:48.875885    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:55:48.875890    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:55:48.912465    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:55:48.912473    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:55:48.948113    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:55:48.948129    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:55:48.966308    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:55:48.966319    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:55:48.977867    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:55:48.977880    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:55:49.003323    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:55:49.003330    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:55:49.014863    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:55:49.014877    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:55:49.026686    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:55:49.026698    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:55:49.041794    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:55:49.041804    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:55:49.059034    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:55:49.059044    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:55:49.071106    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:55:49.071118    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:55:49.082659    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:55:49.082670    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:55:49.094299    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:55:49.094309    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:55:49.099051    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:55:49.099057    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:55:49.116260    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:55:49.116273    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:55:49.132635    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:55:49.132647    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:55:49.144911    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:55:49.144923    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:55:51.666920    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:55:56.669078    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:55:56.669203    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:55:56.680812    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:55:56.680884    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:55:56.693532    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:55:56.693614    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:55:56.705009    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:55:56.705081    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:55:56.716813    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:55:56.716890    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:55:56.728797    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:55:56.728868    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:55:56.741622    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:55:56.741697    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:55:56.753565    4369 logs.go:276] 0 containers: []
	W0806 00:55:56.753582    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:55:56.753655    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:55:56.773460    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:55:56.773483    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:55:56.773489    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:55:56.786519    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:55:56.786536    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:55:56.800011    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:55:56.800023    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:55:56.812134    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:55:56.812147    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:55:56.827808    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:55:56.827824    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:55:56.842866    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:55:56.842881    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:55:56.861673    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:55:56.861689    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:55:56.874247    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:55:56.874258    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:55:56.911357    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:55:56.911371    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:55:56.925044    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:55:56.925055    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:55:56.943731    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:55:56.943743    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:55:56.957548    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:55:56.957561    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:55:56.984906    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:55:56.984925    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:55:57.000043    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:55:57.000055    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:55:57.040147    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:55:57.040169    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:55:57.045459    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:55:57.045472    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:55:57.060919    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:55:57.060931    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:55:59.576245    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:04.578762    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:04.579186    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:56:04.621598    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:56:04.621729    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:56:04.643651    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:56:04.643752    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:56:04.659078    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:56:04.659147    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:56:04.671833    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:56:04.671901    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:56:04.682542    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:56:04.682599    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:56:04.693512    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:56:04.693579    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:56:04.708111    4369 logs.go:276] 0 containers: []
	W0806 00:56:04.708126    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:56:04.708200    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:56:04.719148    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:56:04.719166    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:56:04.719172    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:56:04.732865    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:56:04.732878    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:56:04.744597    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:56:04.744611    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:56:04.756127    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:56:04.756138    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:56:04.773659    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:56:04.773674    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:56:04.785333    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:56:04.785346    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:56:04.804204    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:56:04.804220    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:56:04.816584    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:56:04.816595    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:56:04.839813    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:56:04.839826    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:56:04.857465    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:56:04.857475    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:56:04.869709    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:56:04.869721    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:56:04.874213    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:56:04.874222    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:56:04.909886    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:56:04.909895    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:56:04.921855    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:56:04.921867    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:56:04.933808    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:56:04.933820    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:56:04.972747    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:56:04.972755    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:56:04.990246    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:56:04.990257    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:56:07.516192    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:12.518915    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:12.519329    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:56:12.558555    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:56:12.558693    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:56:12.589740    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:56:12.589828    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:56:12.613721    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:56:12.613786    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:56:12.631193    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:56:12.631257    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:56:12.641891    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:56:12.641951    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:56:12.652571    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:56:12.652629    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:56:12.663845    4369 logs.go:276] 0 containers: []
	W0806 00:56:12.663861    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:56:12.663920    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:56:12.674870    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:56:12.674887    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:56:12.674894    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:56:12.686479    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:56:12.686490    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:56:12.699820    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:56:12.699832    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:56:12.711666    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:56:12.711677    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:56:12.723607    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:56:12.723619    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:56:12.741188    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:56:12.741204    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:56:12.753485    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:56:12.753496    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:56:12.778010    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:56:12.778024    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:56:12.816879    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:56:12.816890    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:56:12.831170    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:56:12.831182    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:56:12.851758    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:56:12.851769    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:56:12.863264    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:56:12.863276    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:56:12.888243    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:56:12.888252    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:56:12.892434    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:56:12.892440    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:56:12.927216    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:56:12.927225    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:56:12.943095    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:56:12.943104    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:56:12.957933    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:56:12.957949    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:56:15.477212    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:20.479516    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:20.479970    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:56:20.521422    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:56:20.521550    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:56:20.544102    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:56:20.544209    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:56:20.559229    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:56:20.559307    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:56:20.572366    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:56:20.572446    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:56:20.583542    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:56:20.583604    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:56:20.594466    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:56:20.594534    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:56:20.606221    4369 logs.go:276] 0 containers: []
	W0806 00:56:20.606232    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:56:20.606290    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:56:20.617783    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:56:20.617803    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:56:20.617809    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:56:20.657404    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:56:20.657419    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:56:20.676084    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:56:20.676093    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:56:20.688283    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:56:20.688293    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:56:20.700363    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:56:20.700377    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:56:20.715004    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:56:20.715015    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:56:20.729839    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:56:20.729851    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:56:20.742838    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:56:20.742850    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:56:20.748233    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:56:20.748241    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:56:20.785603    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:56:20.785614    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:56:20.800299    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:56:20.800309    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:56:20.812160    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:56:20.812175    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:56:20.831914    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:56:20.831928    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:56:20.844758    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:56:20.844768    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:56:20.856874    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:56:20.856886    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:56:20.869203    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:56:20.869214    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:56:20.880764    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:56:20.880775    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:56:23.407424    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:28.409778    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:28.410209    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:56:28.444695    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:56:28.444823    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:56:28.465874    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:56:28.465986    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:56:28.481076    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:56:28.481158    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:56:28.493463    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:56:28.493537    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:56:28.504899    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:56:28.504974    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:56:28.519692    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:56:28.519766    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:56:28.534470    4369 logs.go:276] 0 containers: []
	W0806 00:56:28.534479    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:56:28.534531    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:56:28.545348    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:56:28.545367    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:56:28.545372    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:56:28.558125    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:56:28.558138    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:56:28.570595    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:56:28.570609    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:56:28.587801    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:56:28.587813    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:56:28.599483    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:56:28.599499    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:56:28.611398    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:56:28.611412    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:56:28.615908    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:56:28.615915    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:56:28.650021    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:56:28.650032    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:56:28.676399    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:56:28.676411    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:56:28.693848    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:56:28.693858    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:56:28.714436    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:56:28.714448    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:56:28.726784    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:56:28.726796    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:56:28.737987    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:56:28.737997    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:56:28.774124    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:56:28.774135    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:56:28.788460    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:56:28.788473    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:56:28.800082    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:56:28.800097    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:56:28.815649    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:56:28.815661    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:56:31.343412    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:36.346141    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:36.346232    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:56:36.358306    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:56:36.358379    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:56:36.373320    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:56:36.373392    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:56:36.383855    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:56:36.383927    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:56:36.394919    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:56:36.394993    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:56:36.405942    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:56:36.406011    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:56:36.416787    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:56:36.416854    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:56:36.427240    4369 logs.go:276] 0 containers: []
	W0806 00:56:36.427252    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:56:36.427310    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:56:36.440805    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:56:36.440823    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:56:36.440829    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:56:36.455402    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:56:36.455416    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:56:36.473297    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:56:36.473311    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:56:36.484436    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:56:36.484449    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:56:36.498570    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:56:36.498584    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:56:36.533200    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:56:36.533215    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:56:36.545841    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:56:36.545854    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:56:36.569221    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:56:36.569230    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:56:36.605092    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:56:36.605100    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:56:36.609241    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:56:36.609250    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:56:36.623004    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:56:36.623015    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:56:36.635063    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:56:36.635079    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:56:36.646907    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:56:36.646917    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:56:36.661789    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:56:36.661800    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:56:36.673036    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:56:36.673049    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:56:36.690546    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:56:36.690557    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:56:36.706492    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:56:36.706504    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:56:39.220336    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:44.221832    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:44.221931    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:56:44.234559    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:56:44.234632    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:56:44.245889    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:56:44.245962    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:56:44.256404    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:56:44.256474    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:56:44.267433    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:56:44.267507    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:56:44.278481    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:56:44.278551    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:56:44.289512    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:56:44.289578    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:56:44.300990    4369 logs.go:276] 0 containers: []
	W0806 00:56:44.301002    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:56:44.301059    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:56:44.311410    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:56:44.311430    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:56:44.311436    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:56:44.325158    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:56:44.325170    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:56:44.337753    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:56:44.337763    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:56:44.357114    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:56:44.357129    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:56:44.368441    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:56:44.368451    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:56:44.385435    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:56:44.385446    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:56:44.397032    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:56:44.397044    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:56:44.431244    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:56:44.431261    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:56:44.447009    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:56:44.447020    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:56:44.485683    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:56:44.485700    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:56:44.500812    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:56:44.500824    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:56:44.513884    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:56:44.513898    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:56:44.540374    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:56:44.540396    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:56:44.545436    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:56:44.545447    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:56:44.557532    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:56:44.557544    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:56:44.575447    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:56:44.575462    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:56:44.587732    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:56:44.587746    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:56:47.103043    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:52.105478    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:52.105843    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:56:52.134841    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:56:52.134968    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:56:52.154843    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:56:52.154942    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:56:52.168580    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:56:52.168655    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:56:52.180222    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:56:52.180293    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:56:52.192946    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:56:52.193011    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:56:52.204395    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:56:52.204467    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:56:52.214572    4369 logs.go:276] 0 containers: []
	W0806 00:56:52.214584    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:56:52.214647    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:56:52.225209    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:56:52.225227    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:56:52.225232    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:56:52.236862    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:56:52.236874    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:56:52.248580    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:56:52.248595    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:56:52.253117    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:56:52.253124    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:56:52.265358    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:56:52.265371    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:56:52.282469    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:56:52.282481    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:56:52.296547    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:56:52.296557    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:56:52.319815    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:56:52.319823    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:56:52.334003    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:56:52.334013    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:56:52.347946    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:56:52.347959    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:56:52.373594    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:56:52.373607    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:56:52.411164    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:56:52.411174    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:56:52.445627    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:56:52.445639    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:56:52.456988    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:56:52.456999    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:56:52.467958    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:56:52.467970    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:56:52.479847    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:56:52.479859    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:56:52.491553    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:56:52.491565    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:56:55.007166    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:00.009731    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:00.010062    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:57:00.058446    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:57:00.058567    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:57:00.077781    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:57:00.077873    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:57:00.091902    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:57:00.091965    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:57:00.103773    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:57:00.103843    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:57:00.114687    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:57:00.114755    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:57:00.125479    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:57:00.125549    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:57:00.136538    4369 logs.go:276] 0 containers: []
	W0806 00:57:00.136549    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:57:00.136617    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:57:00.147261    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:57:00.147278    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:57:00.147283    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:57:00.152087    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:57:00.152094    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:57:00.188067    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:57:00.188083    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:57:00.202416    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:57:00.202429    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:57:00.213562    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:57:00.213573    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:57:00.225032    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:57:00.225041    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:57:00.239144    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:57:00.239155    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:57:00.251549    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:57:00.251559    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:57:00.263678    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:57:00.263691    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:57:00.301983    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:57:00.301995    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:57:00.319597    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:57:00.319612    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:57:00.331811    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:57:00.331827    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:57:00.343438    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:57:00.343453    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:57:00.355922    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:57:00.355931    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:57:00.367867    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:57:00.367876    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:57:00.378745    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:57:00.378757    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:57:00.396589    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:57:00.396603    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:57:02.922762    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:07.925220    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:07.925668    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:57:07.962389    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:57:07.962522    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:57:07.984167    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:57:07.984264    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:57:07.999088    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:57:07.999167    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:57:08.024829    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:57:08.024894    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:57:08.035542    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:57:08.035607    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:57:08.048869    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:57:08.048948    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:57:08.059090    4369 logs.go:276] 0 containers: []
	W0806 00:57:08.059109    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:57:08.059191    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:57:08.070079    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:57:08.070094    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:57:08.070099    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:57:08.082360    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:57:08.082373    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:57:08.094351    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:57:08.094361    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:57:08.100772    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:57:08.100788    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:57:08.142069    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:57:08.142081    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:57:08.153705    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:57:08.153716    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:57:08.165551    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:57:08.165563    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:57:08.188266    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:57:08.188275    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:57:08.199950    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:57:08.199961    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:57:08.238109    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:57:08.238122    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:57:08.251440    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:57:08.251460    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:57:08.269129    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:57:08.269139    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:57:08.280734    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:57:08.280748    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:57:08.294961    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:57:08.294973    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:57:08.313262    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:57:08.313274    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:57:08.326408    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:57:08.326421    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:57:08.340249    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:57:08.340261    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
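
Each diagnostic pass above follows one fixed pattern: list containers for each control-plane component by its k8s_<component> name prefix, then tail the last 400 lines of every match. A minimal Go sketch of that pattern, with illustrative helper names (listComponent is not minikube's real function):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listComponent returns container IDs whose name matches the
// k8s_<component> prefix, mirroring the `docker ps -a --filter` calls above.
func listComponent(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, comp := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := listComponent(comp)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", comp)
			continue
		}
		for _, id := range ids {
			// Same tail depth as the gathering runs above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", comp, id, logs)
		}
	}
}

Filtering with docker ps -a rather than docker ps is what lets the gatherer pull logs from exited containers too, which is why two kube-apiserver IDs show up while the apiserver is crash-looping.
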
	I0806 00:57:10.852380    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:15.854846    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:15.855078    4369 kubeadm.go:597] duration metric: took 4m4.351931917s to restartPrimaryControlPlane
	W0806 00:57:15.855225    4369 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
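
The probe rhythm above (each healthz request giving up after a ~5s client timeout, retried until a multi-minute budget expires, here 4m4s) reduces to a small poll loop. A sketch under those assumptions; the endpoint and per-request timeout are the ones logged, while the overall budget and pause between retries are illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between probes above
		Transport: &http.Transport{
			// The apiserver serves a cluster-internal cert; a real client
			// would pin the cluster CA rather than skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
			time.Sleep(3 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK && string(body) == "ok" {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("control plane never became healthy; resetting cluster")
}
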
	I0806 00:57:15.855282    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0806 00:57:16.867732    4369 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.012438792s)
	I0806 00:57:16.867794    4369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:57:16.872791    4369 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:57:16.875566    4369 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:57:16.878893    4369 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:57:16.878900    4369 kubeadm.go:157] found existing configuration files:
	
	I0806 00:57:16.878921    4369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/admin.conf
	I0806 00:57:16.881618    4369 kubeadm.go:163] "https://control-plane.minikube.internal:50262" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:57:16.881646    4369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:57:16.884100    4369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/kubelet.conf
	I0806 00:57:16.886982    4369 kubeadm.go:163] "https://control-plane.minikube.internal:50262" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:57:16.887005    4369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:57:16.890184    4369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/controller-manager.conf
	I0806 00:57:16.892920    4369 kubeadm.go:163] "https://control-plane.minikube.internal:50262" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:57:16.892942    4369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:57:16.895603    4369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/scheduler.conf
	I0806 00:57:16.898628    4369 kubeadm.go:163] "https://control-plane.minikube.internal:50262" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:57:16.898647    4369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
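
The grep/rm pairs above enforce a simple invariant: each kubeconfig under /etc/kubernetes must reference the current control-plane endpoint, or it is deleted so the upcoming kubeadm init can regenerate it. A pure-Go equivalent of that check, using the endpoint from this run:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:50262"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		// A missing file or a stale endpoint both trigger removal, so that
		// `kubeadm init` writes a fresh kubeconfig (rm -f likewise tolerates
		// files that are already gone, as in the run above).
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			os.Remove(conf)
		}
	}
}
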
	I0806 00:57:16.901544    4369 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:57:16.923115    4369 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0806 00:57:16.923211    4369 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 00:57:16.978219    4369 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:57:16.978271    4369 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:57:16.978329    4369 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0806 00:57:17.027251    4369 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:57:17.035402    4369 out.go:204]   - Generating certificates and keys ...
	I0806 00:57:17.035434    4369 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 00:57:17.035470    4369 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 00:57:17.035513    4369 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 00:57:17.035555    4369 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 00:57:17.035595    4369 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 00:57:17.035625    4369 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 00:57:17.035663    4369 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 00:57:17.035698    4369 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 00:57:17.035736    4369 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 00:57:17.035772    4369 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 00:57:17.035793    4369 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 00:57:17.035830    4369 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:57:17.125794    4369 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:57:17.355303    4369 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:57:17.468028    4369 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:57:17.715656    4369 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:57:17.744456    4369 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:57:17.744824    4369 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:57:17.744865    4369 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 00:57:17.830953    4369 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:57:17.836396    4369 out.go:204]   - Booting up control plane ...
	I0806 00:57:17.836447    4369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:57:17.836490    4369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:57:17.836525    4369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:57:17.836588    4369 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:57:17.836670    4369 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 00:57:22.331553    4369 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503143 seconds
	I0806 00:57:22.331701    4369 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:57:22.335322    4369 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:57:22.856119    4369 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:57:22.856570    4369 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-217000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:57:23.362305    4369 kubeadm.go:310] [bootstrap-token] Using token: x5n1wz.pdapcdyzofrirx45
	I0806 00:57:23.366542    4369 out.go:204]   - Configuring RBAC rules ...
	I0806 00:57:23.366631    4369 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:57:23.369009    4369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:57:23.371469    4369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:57:23.372683    4369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:57:23.373837    4369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:57:23.375003    4369 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:57:23.378701    4369 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:57:23.562202    4369 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 00:57:23.771848    4369 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 00:57:23.772383    4369 kubeadm.go:310] 
	I0806 00:57:23.772423    4369 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 00:57:23.772426    4369 kubeadm.go:310] 
	I0806 00:57:23.772467    4369 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 00:57:23.772470    4369 kubeadm.go:310] 
	I0806 00:57:23.772482    4369 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 00:57:23.772513    4369 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:57:23.772539    4369 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:57:23.772542    4369 kubeadm.go:310] 
	I0806 00:57:23.772570    4369 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 00:57:23.772573    4369 kubeadm.go:310] 
	I0806 00:57:23.772600    4369 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:57:23.772609    4369 kubeadm.go:310] 
	I0806 00:57:23.772631    4369 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 00:57:23.772669    4369 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:57:23.772710    4369 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:57:23.772713    4369 kubeadm.go:310] 
	I0806 00:57:23.772751    4369 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:57:23.772800    4369 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 00:57:23.772825    4369 kubeadm.go:310] 
	I0806 00:57:23.772866    4369 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x5n1wz.pdapcdyzofrirx45 \
	I0806 00:57:23.772945    4369 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:004497139f3dc048a20953509ef68dec08d54d5db6f0d1b10a415219fecf194f \
	I0806 00:57:23.772962    4369 kubeadm.go:310] 	--control-plane 
	I0806 00:57:23.772966    4369 kubeadm.go:310] 
	I0806 00:57:23.773014    4369 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:57:23.773020    4369 kubeadm.go:310] 
	I0806 00:57:23.773062    4369 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x5n1wz.pdapcdyzofrirx45 \
	I0806 00:57:23.773114    4369 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:004497139f3dc048a20953509ef68dec08d54d5db6f0d1b10a415219fecf194f 
	I0806 00:57:23.773169    4369 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:57:23.773182    4369 cni.go:84] Creating CNI manager for ""
	I0806 00:57:23.773196    4369 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:57:23.777062    4369 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 00:57:23.783999    4369 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 00:57:23.787195    4369 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
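
The scp above writes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist. The log does not reproduce the payload; a representative bridge-plus-portmap conflist of roughly that shape (all field values illustrative, not the actual bytes shipped) can be written like this:

package main

import "os"

// Representative bridge CNI config; the real 1-k8s.conflist contents are
// not shown in the log, so every field below is an assumption.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
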
	I0806 00:57:23.792808    4369 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 00:57:23.792862    4369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:57:23.792886    4369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-217000 minikube.k8s.io/updated_at=2024_08_06T00_57_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=running-upgrade-217000 minikube.k8s.io/primary=true
	I0806 00:57:23.832217    4369 ops.go:34] apiserver oom_adj: -16
	I0806 00:57:23.832217    4369 kubeadm.go:1113] duration metric: took 39.391333ms to wait for elevateKubeSystemPrivileges
	I0806 00:57:23.832230    4369 kubeadm.go:394] duration metric: took 4m12.383212167s to StartCluster
	I0806 00:57:23.832241    4369 settings.go:142] acquiring lock: {Name:mk345cecdfb5b849013811e238a7c51cfd047298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:57:23.832323    4369 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:57:23.832719    4369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/kubeconfig: {Name:mk054609795edfdc491af119142ed9d8e6063b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:57:23.832919    4369 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:57:23.832924    4369 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 00:57:23.832962    4369 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-217000"
	I0806 00:57:23.833006    4369 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-217000"
	W0806 00:57:23.833013    4369 addons.go:243] addon storage-provisioner should already be in state true
	I0806 00:57:23.833024    4369 host.go:66] Checking if "running-upgrade-217000" exists ...
	I0806 00:57:23.833005    4369 config.go:182] Loaded profile config "running-upgrade-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 00:57:23.833014    4369 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-217000"
	I0806 00:57:23.833048    4369 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-217000"
	I0806 00:57:23.833875    4369 kapi.go:59] client config for running-upgrade-217000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101eabf90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:57:23.833986    4369 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-217000"
	W0806 00:57:23.833991    4369 addons.go:243] addon default-storageclass should already be in state true
	I0806 00:57:23.833996    4369 host.go:66] Checking if "running-upgrade-217000" exists ...
	I0806 00:57:23.837059    4369 out.go:177] * Verifying Kubernetes components...
	I0806 00:57:23.837425    4369 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 00:57:23.841164    4369 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 00:57:23.841172    4369 sshutil.go:53] new ssh client: &{IP:localhost Port:50230 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/running-upgrade-217000/id_rsa Username:docker}
	I0806 00:57:23.845001    4369 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:57:23.849007    4369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:57:23.853077    4369 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:57:23.853084    4369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 00:57:23.853090    4369 sshutil.go:53] new ssh client: &{IP:localhost Port:50230 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/running-upgrade-217000/id_rsa Username:docker}
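
Every Run: entry in this log is executed over an SSH client like the two opened above (localhost:50230 is the host port forwarded into the QEMU guest). A minimal sketch of building such a client with golang.org/x/crypto/ssh, skipping host-key verification as is tolerable for a throwaway test VM:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, port, and username are the ones this run logged.
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19370-965/.minikube/machines/running-upgrade-217000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "localhost:50230", &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable for an ephemeral test VM; real code should verify host keys.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// One session per command, as in the runner above.
	out, err := session.CombinedOutput("sudo systemctl daemon-reload")
	fmt.Printf("%s err=%v\n", out, err)
}
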
	I0806 00:57:23.935896    4369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:57:23.941056    4369 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:57:23.941095    4369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:57:23.944843    4369 api_server.go:72] duration metric: took 111.912292ms to wait for apiserver process to appear ...
	I0806 00:57:23.944853    4369 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:57:23.944859    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:24.003585    4369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 00:57:24.027568    4369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:57:28.946934    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:28.946984    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:33.947338    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:33.947365    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:38.947647    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:38.947690    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:43.948109    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:43.948132    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:48.948710    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:48.948743    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:53.949467    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:53.949521    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0806 00:57:54.344308    4369 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0806 00:57:54.350342    4369 out.go:177] * Enabled addons: storage-provisioner
	I0806 00:57:54.360289    4369 addons.go:510] duration metric: took 30.527561333s for enable addons: enabled=[storage-provisioner]
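
The failed default-storageclass callback above was trying to list StorageClasses and promote one (minikube's is named standard) to cluster default via the storageclass.kubernetes.io/is-default-class annotation; it failed only because 10.0.2.15:8443 never answered. A minimal client-go sketch of that promotion, with an illustrative kubeconfig path and class name:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; inside the VM this run used /var/lib/minikube/kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// This List call is the one failing above with "dial tcp ... i/o timeout".
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(fmt.Errorf("Error listing StorageClasses: %w", err))
	}
	for i := range scs.Items {
		sc := &scs.Items[i]
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		// "true" only on the class being promoted; "false" on the rest.
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] =
			fmt.Sprint(sc.Name == "standard")
		if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
}
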
	I0806 00:57:58.950446    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:58.950499    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:03.951739    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:03.951762    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:08.953208    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:08.953223    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:13.955073    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:13.955120    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:18.956072    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:18.956109    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:23.958336    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:23.958463    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:23.979496    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:58:23.979571    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:23.994942    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:58:23.995025    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:24.006702    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:58:24.006777    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:24.017277    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:58:24.017342    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:24.027897    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:58:24.027963    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:24.038868    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:58:24.038930    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:24.049334    4369 logs.go:276] 0 containers: []
	W0806 00:58:24.049345    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:24.049404    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:24.060302    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:58:24.060315    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:58:24.060322    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:58:24.075142    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:58:24.075153    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:58:24.087098    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:58:24.087109    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:58:24.101508    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:58:24.101520    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:58:24.113346    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:58:24.113359    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:24.125063    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:24.125077    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:24.162800    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:58:24.162811    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:58:24.177250    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:58:24.177259    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:58:24.189586    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:58:24.189597    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:58:24.207343    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:58:24.207354    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:58:24.219218    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:24.219229    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:24.243223    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:24.243232    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:24.277144    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:24.277156    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:26.782476    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:31.784843    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:31.785011    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:31.796470    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:58:31.796545    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:31.807135    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:58:31.807208    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:31.817817    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:58:31.817881    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:31.828181    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:58:31.828243    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:31.838684    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:58:31.838755    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:31.849513    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:58:31.849584    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:31.859756    4369 logs.go:276] 0 containers: []
	W0806 00:58:31.859772    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:31.859822    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:31.870716    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:58:31.870732    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:58:31.870738    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:58:31.888615    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:31.888627    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:31.911628    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:58:31.911639    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:31.922929    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:31.922943    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:31.927790    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:31.927799    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:31.962768    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:58:31.962779    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:58:31.977986    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:58:31.977996    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:58:31.989685    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:58:31.989699    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:58:32.005529    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:58:32.005539    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:58:32.018680    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:32.018691    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:32.052513    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:58:32.052526    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:58:32.067085    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:58:32.067095    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:58:32.078955    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:58:32.078965    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:58:34.592631    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:39.594107    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:39.594291    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:39.609698    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:58:39.609778    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:39.621578    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:58:39.621650    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:39.634077    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:58:39.634138    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:39.646423    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:58:39.646498    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:39.660774    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:58:39.660845    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:39.671353    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:58:39.671422    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:39.685403    4369 logs.go:276] 0 containers: []
	W0806 00:58:39.685414    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:39.685470    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:39.696069    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:58:39.696086    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:58:39.696091    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:39.707996    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:39.708008    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:39.742658    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:58:39.742671    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:58:39.756738    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:58:39.756750    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:58:39.768272    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:58:39.768283    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:58:39.780263    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:58:39.780276    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:58:39.795038    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:58:39.795049    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:58:39.812833    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:58:39.812842    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:58:39.824136    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:39.824147    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:39.849424    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:39.849434    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:39.882311    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:39.882320    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:39.886637    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:58:39.886645    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:58:39.902392    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:58:39.902405    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:58:42.416446    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:47.418846    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:47.419263    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:47.459216    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:58:47.459351    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:47.480654    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:58:47.480772    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:47.495856    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:58:47.495927    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:47.507802    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:58:47.507871    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:47.518408    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:58:47.518479    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:47.533275    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:58:47.533338    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:47.544027    4369 logs.go:276] 0 containers: []
	W0806 00:58:47.544039    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:47.544103    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:47.558361    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:58:47.558378    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:58:47.558383    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:58:47.569670    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:58:47.569681    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:58:47.581674    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:47.581688    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:47.616647    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:47.616658    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:47.653046    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:58:47.653059    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:58:47.667420    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:58:47.667431    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:58:47.685329    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:58:47.685340    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:58:47.697264    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:58:47.697274    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:58:47.712107    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:47.712118    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:47.716843    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:58:47.716849    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:58:47.728967    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:58:47.728977    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:58:47.748773    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:47.748783    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:47.773758    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:58:47.773777    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:50.287464    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:55.289710    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:55.289890    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:55.303272    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:58:55.303350    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:55.314218    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:58:55.314279    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:55.325265    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:58:55.325328    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:55.336494    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:58:55.336569    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:55.346477    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:58:55.346536    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:55.356905    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:58:55.356973    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:55.367122    4369 logs.go:276] 0 containers: []
	W0806 00:58:55.367140    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:55.367201    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:55.378195    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:58:55.378210    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:55.378215    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:55.410872    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:58:55.410883    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:58:55.426545    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:58:55.426558    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:58:55.443733    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:58:55.443744    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:58:55.462154    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:58:55.462166    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:58:55.473533    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:55.473546    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:55.477979    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:55.477987    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:55.512191    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:58:55.512201    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:58:55.526823    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:58:55.526835    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:58:55.549177    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:58:55.549187    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:58:55.562465    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:58:55.562478    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:58:55.574631    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:55.574641    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:55.598599    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:58:55.598609    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:58.112584    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:03.114987    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:03.115225    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:03.143041    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:03.143177    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:03.161267    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:03.161356    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:03.174978    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:03.175051    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:03.186110    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:03.186179    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:03.196641    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:03.196707    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:03.207585    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:03.207652    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:03.218581    4369 logs.go:276] 0 containers: []
	W0806 00:59:03.218593    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:03.218650    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:03.229078    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:03.229094    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:03.229099    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:03.245264    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:03.245276    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:03.256550    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:03.256559    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:03.268473    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:03.268484    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:03.302084    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:03.302092    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:03.337668    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:03.337682    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:03.349395    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:03.349409    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:03.368737    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:03.368750    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:03.387104    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:03.387115    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:03.398740    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:03.398752    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:03.423807    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:03.423817    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:03.435546    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:03.435556    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:03.440500    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:03.440509    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:05.956850    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:10.959182    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:10.959397    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:10.983861    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:10.983968    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:11.001104    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:11.001183    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:11.014273    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:11.014338    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:11.026645    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:11.026713    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:11.037387    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:11.037459    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:11.049057    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:11.049116    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:11.061630    4369 logs.go:276] 0 containers: []
	W0806 00:59:11.061643    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:11.061697    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:11.081317    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:11.081334    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:11.081339    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:11.116623    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:11.116634    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:11.131124    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:11.131135    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:11.145539    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:11.145549    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:11.158062    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:11.158073    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:11.170748    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:11.170761    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:11.194634    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:11.194642    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:11.229349    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:11.229357    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:11.233813    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:11.233822    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:11.249772    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:11.249785    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:11.269832    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:11.269841    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:11.283904    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:11.283915    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:11.301511    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:11.301521    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
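
	Each iteration begins by discovering container IDs per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, which is why a missing component (kindnet here) yields "0 containers" plus a warning rather than an error. A small Go sketch of that discovery step, under the assumption that it shells out to the docker CLI exactly as the logged commands do; the helper name containerIDs and the component list are illustrative.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all container IDs (running or exited) whose name
	// matches the k8s_<component> prefix, mirroring the logged docker ps calls.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("docker ps failed for %s: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("W: no container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}
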
	I0806 00:59:13.813977    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:18.816186    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:18.816393    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:18.832337    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:18.832419    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:18.844279    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:18.844348    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:18.855455    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:18.855516    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:18.866198    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:18.866303    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:18.876354    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:18.876433    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:18.886526    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:18.886598    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:18.898076    4369 logs.go:276] 0 containers: []
	W0806 00:59:18.898087    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:18.898143    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:18.912724    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:18.912740    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:18.912745    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:18.927317    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:18.927327    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:18.939085    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:18.939096    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:18.958084    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:18.958096    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:18.969811    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:18.969824    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:18.993044    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:18.993065    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:19.026195    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:19.026203    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:19.041502    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:19.041514    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:19.055348    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:19.055360    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:19.079359    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:19.079372    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:19.091778    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:19.091794    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:19.096565    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:19.096572    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:19.130255    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:19.130266    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:21.644237    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:26.646813    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:26.647134    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:26.677964    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:26.678101    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:26.698114    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:26.698211    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:26.712360    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:26.712440    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:26.724272    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:26.724339    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:26.736004    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:26.736067    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:26.746634    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:26.746701    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:26.756654    4369 logs.go:276] 0 containers: []
	W0806 00:59:26.756666    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:26.756732    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:26.770929    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:26.770948    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:26.770953    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:26.784173    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:26.784185    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:26.788713    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:26.788721    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:26.824286    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:26.824300    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:26.838525    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:26.838536    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:26.849889    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:26.849900    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:26.862258    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:26.862269    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:26.876657    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:26.876669    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:26.888602    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:26.888613    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:26.906335    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:26.906346    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:26.941507    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:26.941528    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:26.955776    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:26.955786    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:26.982021    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:26.982036    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:29.497344    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:34.499392    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:34.499638    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:34.523634    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:34.523753    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:34.540243    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:34.540313    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:34.553434    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:34.553506    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:34.571313    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:34.571378    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:34.582085    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:34.582146    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:34.593007    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:34.593077    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:34.603449    4369 logs.go:276] 0 containers: []
	W0806 00:59:34.603464    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:34.603524    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:34.613603    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:34.613619    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:34.613624    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:34.631535    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:34.631550    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:34.655700    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:34.655710    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:34.667097    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:34.667107    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:34.703513    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:34.703524    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:34.718089    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:34.718103    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:34.732242    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:34.732251    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:34.743640    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:34.743665    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:34.758145    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:34.758157    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:34.791333    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:34.791340    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:34.795805    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:34.795814    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:34.807426    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:34.807437    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:34.819278    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:34.819289    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:37.333126    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:42.335433    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:42.335639    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:42.351120    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:42.351200    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:42.362981    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:42.363049    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:42.373385    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:42.373456    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:42.384097    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:42.384167    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:42.394781    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:42.394850    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:42.405030    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:42.405094    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:42.414944    4369 logs.go:276] 0 containers: []
	W0806 00:59:42.414957    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:42.415025    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:42.426574    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:42.426594    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:42.426600    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:42.442132    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:42.442141    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:42.454006    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:42.454017    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:42.488556    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:42.488563    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:42.524410    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:42.524421    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:42.541935    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:42.541946    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:42.555551    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:42.555564    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:42.567421    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:42.567432    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:42.582006    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:42.582017    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:42.596317    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 00:59:42.596329    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 00:59:42.608026    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:42.608039    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:42.633448    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:42.633455    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:42.644753    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:42.644765    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:42.649248    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 00:59:42.649258    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 00:59:42.660357    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:42.660367    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
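
	The per-source gathering commands repeated above are plain shell one-liners run through /bin/bash -c. Note the fallback idiom in the container-status line: `which crictl || echo crictl` expands to crictl's path when it is installed and to the bare word crictl otherwise, so when crictl is absent the first command fails and || sudo docker ps -a takes over. A hedged Go sketch that replays a few of these commands: the real runner executes them over SSH inside the guest VM, so running them locally, and the gatherCmds map itself, are assumptions of the sketch.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherCmds holds stand-ins for the "Gathering logs for ..." one-liners
	// seen in the report, keyed by the source name the log prints.
	var gatherCmds = map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}

	func main() {
		for name, cmd := range gatherCmds {
			fmt.Printf("Gathering logs for %s ...\n", name)
			// CombinedOutput captures stdout and stderr together, which is
			// what a diagnostic dump wants.
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("%s failed: %v\n", name, err)
			}
			fmt.Print(string(out))
		}
	}
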
	I0806 00:59:45.174303    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:50.176904    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:50.177131    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:50.197796    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:50.197909    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:50.213727    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:50.213794    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:50.225551    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:50.225624    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:50.236380    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:50.236450    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:50.247111    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:50.247175    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:50.258193    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:50.258259    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:50.267942    4369 logs.go:276] 0 containers: []
	W0806 00:59:50.267957    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:50.268009    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:50.278825    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:50.278843    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 00:59:50.278849    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 00:59:50.290343    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:50.290354    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:50.302033    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:50.302043    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:50.323066    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:50.323080    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:50.335055    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:50.335068    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:50.339967    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:50.339976    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:50.351795    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:50.351804    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:50.377213    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:50.377222    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:50.411054    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:50.411061    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:50.446798    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:50.446812    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:50.462127    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 00:59:50.462141    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 00:59:50.473448    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:50.473458    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:50.485615    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:50.485629    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:50.499441    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:50.499451    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:50.515904    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:50.515913    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:53.029815    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:58.030665    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:58.030849    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:58.043768    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:58.043838    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:58.054977    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:58.055048    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:58.070188    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:58.070261    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:58.080812    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:58.080877    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:58.091460    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:58.091518    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:58.102569    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:58.102634    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:58.112750    4369 logs.go:276] 0 containers: []
	W0806 00:59:58.112761    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:58.112817    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:58.133501    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:58.133521    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:58.133527    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:58.168381    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:58.168392    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:58.180322    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:58.180333    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:58.192400    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:58.192410    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:58.204464    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:58.204474    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:58.222001    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:58.222012    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:58.234025    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:58.234035    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:58.238669    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:58.238675    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:58.252731    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:58.252745    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:58.277754    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:58.277762    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:58.289180    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:58.289191    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:58.304186    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:58.304200    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:58.337838    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:58.337848    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:58.352181    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 00:59:58.352191    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 00:59:58.364676    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 00:59:58.364687    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:00.878025    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:05.880337    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:05.880530    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:05.895650    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:05.895726    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:05.907843    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:05.907913    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:05.918686    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:05.918761    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:05.929083    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:05.929152    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:05.939621    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:05.939685    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:05.950328    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:05.950392    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:05.961591    4369 logs.go:276] 0 containers: []
	W0806 01:00:05.961602    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:05.961663    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:05.972599    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:05.972618    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:05.972624    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:05.983867    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:05.983881    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:05.995671    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:05.995682    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:06.013530    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:06.013543    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:06.017948    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:06.017957    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:06.041109    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:06.041119    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:06.053027    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:06.053039    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:06.067041    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:06.067050    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:06.078859    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:06.078873    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:06.093251    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:06.093260    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:06.127439    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:06.127451    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:06.139493    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:06.139506    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:06.154660    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:06.154676    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:06.166188    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:06.166202    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:06.178199    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:06.178210    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:08.715487    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:13.717702    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:13.717920    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:13.735203    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:13.735296    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:13.749397    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:13.749475    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:13.760842    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:13.760916    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:13.771782    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:13.771854    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:13.785616    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:13.785689    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:13.796117    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:13.796175    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:13.806568    4369 logs.go:276] 0 containers: []
	W0806 01:00:13.806583    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:13.806645    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:13.816931    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:13.816954    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:13.816959    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:13.856488    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:13.856499    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:13.874799    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:13.874809    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:13.909688    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:13.909698    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:13.923373    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:13.923383    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:13.936116    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:13.936127    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:13.951297    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:13.951307    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:13.977472    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:13.977482    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:13.996471    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:13.996483    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:14.008341    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:14.008353    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:14.020039    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:14.020049    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:14.024678    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:14.024684    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:14.036434    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:14.036448    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:14.048910    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:14.048924    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:14.063549    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:14.063562    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:16.582028    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:21.584224    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:21.584435    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:21.602285    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:21.602369    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:21.614682    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:21.614751    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:21.625144    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:21.625211    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:21.637991    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:21.638055    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:21.649085    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:21.649153    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:21.659817    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:21.659881    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:21.669920    4369 logs.go:276] 0 containers: []
	W0806 01:00:21.669931    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:21.669985    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:21.679715    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:21.679735    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:21.679742    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:21.695781    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:21.695792    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:21.710853    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:21.710868    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:21.723312    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:21.723322    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:21.728398    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:21.728404    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:21.740441    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:21.740452    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:21.757469    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:21.757480    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:21.783821    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:21.783835    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:21.818638    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:21.818646    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:21.830449    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:21.830460    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:21.844732    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:21.844743    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:21.856292    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:21.856304    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:21.890325    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:21.890337    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:21.905025    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:21.905035    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:21.917062    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:21.917074    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:24.429577    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:29.431905    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:29.432027    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:29.443489    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:29.443569    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:29.454274    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:29.454349    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:29.465429    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:29.465499    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:29.475764    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:29.475834    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:29.485983    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:29.486052    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:29.496719    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:29.496785    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:29.507400    4369 logs.go:276] 0 containers: []
	W0806 01:00:29.507409    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:29.507462    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:29.518380    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:29.518397    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:29.518402    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:29.530135    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:29.530148    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:29.541646    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:29.541658    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:29.556004    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:29.556017    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:29.574481    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:29.574492    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:29.596881    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:29.596894    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:29.601321    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:29.601328    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:29.634975    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:29.634990    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:29.650509    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:29.650523    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:29.661972    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:29.661986    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:29.673656    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:29.673669    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:29.690874    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:29.690884    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:29.703171    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:29.703183    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:29.737102    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:29.737110    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:29.749032    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:29.749045    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:32.276320    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:37.277721    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:37.277927    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:37.305970    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:37.306082    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:37.320741    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:37.320813    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:37.334781    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:37.334848    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:37.349394    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:37.349457    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:37.360025    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:37.360102    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:37.370829    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:37.370896    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:37.381164    4369 logs.go:276] 0 containers: []
	W0806 01:00:37.381176    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:37.381230    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:37.391369    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:37.391389    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:37.391395    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:37.426190    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:37.426199    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:37.440649    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:37.440662    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:37.452966    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:37.452977    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:37.464904    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:37.464915    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:37.481731    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:37.481742    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:37.505148    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:37.505155    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:37.509306    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:37.509315    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:37.544428    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:37.544439    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:37.568999    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:37.569010    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:37.581633    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:37.581646    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:37.594513    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:37.594524    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:37.609563    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:37.609573    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:37.621594    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:37.621606    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:37.645125    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:37.645138    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:40.159144    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:45.161500    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:45.161633    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:45.176294    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:45.176368    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:45.188243    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:45.188313    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:45.198867    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:45.198937    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:45.210298    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:45.210370    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:45.222100    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:45.222164    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:45.232780    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:45.232848    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:45.243304    4369 logs.go:276] 0 containers: []
	W0806 01:00:45.243317    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:45.243378    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:45.253654    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:45.253673    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:45.253679    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:45.288990    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:45.289001    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:45.307060    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:45.307070    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:45.319026    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:45.319038    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:45.337898    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:45.337909    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:45.352163    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:45.352178    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:45.364381    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:45.364395    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:45.379323    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:45.379334    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:45.391162    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:45.391173    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:45.424970    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:45.424982    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:45.436935    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:45.436946    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:45.461575    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:45.461585    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:45.465855    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:45.465864    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:45.477412    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:45.477426    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:45.488766    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:45.488777    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:48.003207    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:53.005430    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:53.005611    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:53.016542    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:53.016614    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:53.027053    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:53.027123    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:53.039440    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:53.039513    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:53.054568    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:53.054634    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:53.065062    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:53.065117    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:53.076090    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:53.076158    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:53.087207    4369 logs.go:276] 0 containers: []
	W0806 01:00:53.087225    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:53.087281    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:53.097622    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:53.097639    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:53.097643    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:53.132559    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:53.132570    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:53.144847    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:53.144859    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:53.181110    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:53.181123    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:53.193975    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:53.193989    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:53.211537    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:53.211547    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:53.227894    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:53.227906    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:53.239776    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:53.239786    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:53.264271    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:53.264279    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:53.268595    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:53.268603    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:53.282901    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:53.282911    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:53.296840    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:53.296850    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:53.309023    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:53.309035    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:53.325899    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:53.325912    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:53.342461    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:53.342477    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:55.856025    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:00.857685    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:00.857855    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:01:00.879459    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:01:00.879553    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:01:00.894804    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:01:00.894882    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:01:00.907866    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:01:00.907942    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:01:00.922507    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:01:00.922573    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:01:00.937834    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:01:00.937905    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:01:00.949240    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:01:00.949308    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:01:00.963537    4369 logs.go:276] 0 containers: []
	W0806 01:01:00.963549    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:01:00.963608    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:01:00.973896    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:01:00.973914    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:01:00.973919    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:01:00.988087    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:01:00.988101    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:01:00.999760    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:01:00.999772    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:01:01.013027    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:01:01.013037    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:01:01.027829    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:01:01.027840    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:01:01.053641    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:01:01.053664    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:01:01.079162    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:01:01.079174    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:01:01.116379    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:01:01.116396    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:01:01.130665    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:01:01.130682    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:01:01.142934    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:01:01.142946    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:01:01.155126    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:01:01.155137    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:01:01.172845    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:01:01.172857    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:01:01.184541    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:01:01.184551    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:01:01.189079    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:01:01.189086    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:01:01.225201    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:01:01.225212    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:01:03.738906    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:08.741185    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:08.741271    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:01:08.752609    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:01:08.752682    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:01:08.763157    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:01:08.763228    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:01:08.774050    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:01:08.774118    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:01:08.785210    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:01:08.785281    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:01:08.796064    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:01:08.796131    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:01:08.807702    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:01:08.807773    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:01:08.818398    4369 logs.go:276] 0 containers: []
	W0806 01:01:08.818410    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:01:08.818464    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:01:08.829403    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:01:08.829421    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:01:08.829426    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:01:08.848544    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:01:08.848554    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:01:08.860830    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:01:08.860841    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:01:08.872555    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:01:08.872569    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:01:08.912434    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:01:08.912449    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:01:08.924236    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:01:08.924248    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:01:08.936327    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:01:08.936338    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:01:08.948676    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:01:08.948700    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:01:08.963763    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:01:08.963773    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:01:08.981599    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:01:08.981609    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:01:09.006156    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:01:09.006173    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:01:09.020636    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:01:09.020646    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:01:09.032285    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:01:09.032296    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:01:09.066153    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:01:09.066163    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:01:09.079593    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:01:09.079604    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:01:11.586713    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:16.588111    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:16.588309    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:01:16.614846    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:01:16.614947    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:01:16.631089    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:01:16.631164    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:01:16.644294    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:01:16.644377    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:01:16.655445    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:01:16.655516    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:01:16.665696    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:01:16.665765    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:01:16.676082    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:01:16.676149    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:01:16.686755    4369 logs.go:276] 0 containers: []
	W0806 01:01:16.686767    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:01:16.686828    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:01:16.697121    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:01:16.697142    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:01:16.697147    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:01:16.719751    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:01:16.719759    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:01:16.724638    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:01:16.724646    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:01:16.738937    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:01:16.738947    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:01:16.750718    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:01:16.750730    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:01:16.770981    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:01:16.770996    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:01:16.788220    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:01:16.788230    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:01:16.821461    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:01:16.821470    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:01:16.836034    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:01:16.836044    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:01:16.847727    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:01:16.847738    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:01:16.860092    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:01:16.860103    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:01:16.874931    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:01:16.874945    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:01:16.886221    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:01:16.886231    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:01:16.897911    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:01:16.897921    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:01:16.935243    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:01:16.935254    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:01:19.451611    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:24.453552    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:24.458247    4369 out.go:177] 
	W0806 01:01:24.462153    4369 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0806 01:01:24.462162    4369 out.go:239] * 
	W0806 01:01:24.462900    4369 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:01:24.473122    4369 out.go:177] 
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-217000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-06 01:01:24.572565 -0700 PDT m=+3431.877851584
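The stderr above shows the shape of the failure: minikube probes https://10.0.2.15:8443/healthz every few seconds, each probe dies with a client timeout, and once the 6m0s node wait expires it exits with GUEST_START. Below is a minimal Go sketch of that polling pattern; the names and structure are assumptions for illustration, not minikube's actual api_server.go implementation.

// Hypothetical sketch of the wait loop visible in the stderr above: poll the
// apiserver /healthz endpoint with a short per-request timeout until an
// overall deadline (6m0s in this run) expires. Names are illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// A per-request timeout like this produces the log's
		// "(Client.Timeout exceeded while awaiting headers)" errors.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the guest apiserver cert is not trusted by the host,
			// so a health probe would skip verification (assumption)
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // healthy
			}
			fmt.Printf("healthz returned status %d\n", code)
		} else {
			fmt.Printf("stopped: %v\n", err) // e.g. context deadline exceeded
		}
		time.Sleep(2 * time.Second) // back off before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}

Run against an unreachable endpoint, this reproduces the repeated "stopped: ... context deadline exceeded" lines followed by the final "apiserver healthz never reported healthy" error seen above.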
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-217000 -n running-upgrade-217000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-217000 -n running-upgrade-217000: exit status 2 (15.714121s)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
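Both the stderr above and the post-mortem minikube logs below repeat one gathering pattern per control-plane component: list matching container IDs with a docker ps name filter, then tail each container's logs. A short Go sketch of that pattern follows; the helper names are assumptions for illustration, not minikube's logs.go API.

// Assumed sketch of the per-component log gathering shown in this report:
// `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` to find
// container IDs, then `docker logs --tail 400 <id>` for each match.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers whose name matches
// k8s_<component>, mirroring the docker ps invocations in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// mirror the log's `docker logs --tail 400 <id>` step
			logs, _ := exec.Command("docker", "logs",
				"--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}

When the filter matches nothing (as with kindnet throughout this run), the harness logs a warning such as `No container was found matching "kindnet"` and moves on to the next component.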
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-217000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-958000          | force-systemd-flag-958000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-873000              | force-systemd-env-873000  | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-873000           | force-systemd-env-873000  | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	| start   | -p docker-flags-657000                | docker-flags-657000       | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-958000             | force-systemd-flag-958000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-958000          | force-systemd-flag-958000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	| start   | -p cert-expiration-730000             | cert-expiration-730000    | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-657000 ssh               | docker-flags-657000       | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-657000 ssh               | docker-flags-657000       | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-657000                | docker-flags-657000       | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	| start   | -p cert-options-780000                | cert-options-780000       | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-780000 ssh               | cert-options-780000       | jenkins | v1.33.1 | 06 Aug 24 00:52 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-780000 -- sudo        | cert-options-780000       | jenkins | v1.33.1 | 06 Aug 24 00:52 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-780000                | cert-options-780000       | jenkins | v1.33.1 | 06 Aug 24 00:52 PDT | 06 Aug 24 00:52 PDT |
	| start   | -p running-upgrade-217000             | minikube                  | jenkins | v1.26.0 | 06 Aug 24 00:52 PDT | 06 Aug 24 00:53 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-217000             | running-upgrade-217000    | jenkins | v1.33.1 | 06 Aug 24 00:53 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-730000             | cert-expiration-730000    | jenkins | v1.33.1 | 06 Aug 24 00:54 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-730000             | cert-expiration-730000    | jenkins | v1.33.1 | 06 Aug 24 00:55 PDT | 06 Aug 24 00:55 PDT |
	| start   | -p kubernetes-upgrade-400000          | kubernetes-upgrade-400000 | jenkins | v1.33.1 | 06 Aug 24 00:55 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-400000          | kubernetes-upgrade-400000 | jenkins | v1.33.1 | 06 Aug 24 00:55 PDT | 06 Aug 24 00:55 PDT |
	| start   | -p kubernetes-upgrade-400000          | kubernetes-upgrade-400000 | jenkins | v1.33.1 | 06 Aug 24 00:55 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-400000          | kubernetes-upgrade-400000 | jenkins | v1.33.1 | 06 Aug 24 00:55 PDT | 06 Aug 24 00:55 PDT |
	| start   | -p stopped-upgrade-180000             | minikube                  | jenkins | v1.26.0 | 06 Aug 24 00:55 PDT | 06 Aug 24 00:56 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-180000 stop           | minikube                  | jenkins | v1.26.0 | 06 Aug 24 00:56 PDT | 06 Aug 24 00:56 PDT |
	| start   | -p stopped-upgrade-180000             | stopped-upgrade-180000    | jenkins | v1.33.1 | 06 Aug 24 00:56 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:56:15
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:56:15.218468    4539 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:56:15.218653    4539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:56:15.218658    4539 out.go:304] Setting ErrFile to fd 2...
	I0806 00:56:15.218661    4539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:56:15.218812    4539 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:56:15.219965    4539 out.go:298] Setting JSON to false
	I0806 00:56:15.239619    4539 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3343,"bootTime":1722927632,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:56:15.239698    4539 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:56:15.244024    4539 out.go:177] * [stopped-upgrade-180000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:56:15.250894    4539 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:56:15.250955    4539 notify.go:220] Checking for updates...
	I0806 00:56:15.258994    4539 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:56:15.262129    4539 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:56:15.265007    4539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:56:15.267975    4539 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:56:15.271045    4539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:56:15.274236    4539 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 00:56:15.277935    4539 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0806 00:56:15.280927    4539 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:56:15.284956    4539 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 00:56:15.290897    4539 start.go:297] selected driver: qemu2
	I0806 00:56:15.290903    4539 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0806 00:56:15.290964    4539 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:56:15.293626    4539 cni.go:84] Creating CNI manager for ""
	I0806 00:56:15.293645    4539 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:56:15.293668    4539 start.go:340] cluster config:
	{Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0806 00:56:15.293717    4539 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:56:15.300973    4539 out.go:177] * Starting "stopped-upgrade-180000" primary control-plane node in "stopped-upgrade-180000" cluster
	I0806 00:56:15.304959    4539 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0806 00:56:15.304977    4539 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0806 00:56:15.304986    4539 cache.go:56] Caching tarball of preloaded images
	I0806 00:56:15.305047    4539 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:56:15.305052    4539 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0806 00:56:15.305110    4539 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/config.json ...
	I0806 00:56:15.305621    4539 start.go:360] acquireMachinesLock for stopped-upgrade-180000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:56:15.305651    4539 start.go:364] duration metric: took 23.791µs to acquireMachinesLock for "stopped-upgrade-180000"
	I0806 00:56:15.305659    4539 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:56:15.305665    4539 fix.go:54] fixHost starting: 
	I0806 00:56:15.305775    4539 fix.go:112] recreateIfNeeded on stopped-upgrade-180000: state=Stopped err=<nil>
	W0806 00:56:15.305783    4539 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:56:15.313974    4539 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-180000" ...
	I0806 00:56:12.518915    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:12.519329    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:56:12.558555    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:56:12.558693    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:56:12.589740    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:56:12.589828    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:56:12.613721    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:56:12.613786    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:56:12.631193    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:56:12.631257    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:56:12.641891    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:56:12.641951    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:56:12.652571    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:56:12.652629    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:56:12.663845    4369 logs.go:276] 0 containers: []
	W0806 00:56:12.663861    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:56:12.663920    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:56:12.674870    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:56:12.674887    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:56:12.674894    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:56:12.686479    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:56:12.686490    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:56:12.699820    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:56:12.699832    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:56:12.711666    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:56:12.711677    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:56:12.723607    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:56:12.723619    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:56:12.741188    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:56:12.741204    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:56:12.753485    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:56:12.753496    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:56:12.778010    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:56:12.778024    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:56:12.816879    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:56:12.816890    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:56:12.831170    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:56:12.831182    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:56:12.851758    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:56:12.851769    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:56:12.863264    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:56:12.863276    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:56:12.888243    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:56:12.888252    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:56:12.892434    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:56:12.892440    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:56:12.927216    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:56:12.927225    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:56:12.943095    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:56:12.943104    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:56:12.957933    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:56:12.957949    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:56:15.477212    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:15.318007    4539 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:56:15.318069    4539 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50451-:22,hostfwd=tcp::50452-:2376,hostname=stopped-upgrade-180000 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/disk.qcow2
	I0806 00:56:15.365426    4539 main.go:141] libmachine: STDOUT: 
	I0806 00:56:15.365455    4539 main.go:141] libmachine: STDERR: 
	I0806 00:56:15.365460    4539 main.go:141] libmachine: Waiting for VM to start (ssh -p 50451 docker@127.0.0.1)...
	I0806 00:56:20.479516    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:20.479970    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:56:20.521422    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:56:20.521550    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:56:20.544102    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:56:20.544209    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:56:20.559229    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:56:20.559307    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:56:20.572366    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:56:20.572446    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:56:20.583542    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:56:20.583604    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:56:20.594466    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:56:20.594534    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:56:20.606221    4369 logs.go:276] 0 containers: []
	W0806 00:56:20.606232    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:56:20.606290    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:56:20.617783    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:56:20.617803    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:56:20.617809    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:56:20.657404    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:56:20.657419    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:56:20.676084    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:56:20.676093    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:56:20.688283    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:56:20.688293    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:56:20.700363    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:56:20.700377    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:56:20.715004    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:56:20.715015    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:56:20.729839    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:56:20.729851    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:56:20.742838    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:56:20.742850    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:56:20.748233    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:56:20.748241    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:56:20.785603    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:56:20.785614    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:56:20.800299    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:56:20.800309    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:56:20.812160    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:56:20.812175    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:56:20.831914    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:56:20.831928    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:56:20.844758    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:56:20.844768    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:56:20.856874    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:56:20.856886    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:56:20.869203    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:56:20.869214    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:56:20.880764    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:56:20.880775    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:56:23.407424    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:28.409778    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:28.410209    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:56:28.444695    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:56:28.444823    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:56:28.465874    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:56:28.465986    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:56:28.481076    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:56:28.481158    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:56:28.493463    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:56:28.493537    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:56:28.504899    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:56:28.504974    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:56:28.519692    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:56:28.519766    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:56:28.534470    4369 logs.go:276] 0 containers: []
	W0806 00:56:28.534479    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:56:28.534531    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:56:28.545348    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:56:28.545367    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:56:28.545372    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:56:28.558125    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:56:28.558138    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:56:28.570595    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:56:28.570609    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:56:28.587801    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:56:28.587813    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:56:28.599483    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:56:28.599499    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:56:28.611398    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:56:28.611412    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:56:28.615908    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:56:28.615915    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:56:28.650021    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:56:28.650032    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:56:28.676399    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:56:28.676411    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:56:28.693848    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:56:28.693858    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:56:28.714436    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:56:28.714448    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:56:28.726784    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:56:28.726796    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:56:28.737987    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:56:28.737997    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:56:28.774124    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:56:28.774135    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:56:28.788460    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:56:28.788473    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:56:28.800082    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:56:28.800097    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:56:28.815649    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:56:28.815661    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:56:34.941183    4539 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/config.json ...
	I0806 00:56:34.941762    4539 machine.go:94] provisionDockerMachine start ...
	I0806 00:56:34.941874    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:34.942190    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:34.942201    4539 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:56:35.014364    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 00:56:35.014389    4539 buildroot.go:166] provisioning hostname "stopped-upgrade-180000"
	I0806 00:56:35.014460    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:35.014650    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:35.014661    4539 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-180000 && echo "stopped-upgrade-180000" | sudo tee /etc/hostname
	I0806 00:56:35.081315    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-180000
	
	I0806 00:56:35.081374    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:35.081501    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:35.081510    4539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-180000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-180000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-180000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:56:35.143539    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
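	The empty SSH output above means the /etc/hosts script succeeded silently: it leaves the file alone when the hostname is already present, rewrites an existing 127.0.1.1 entry if there is one, and appends a new entry otherwise. The same idempotent logic in Go, with ensureHostsEntry as an invented name:

    // Sketch of the idempotent /etc/hosts update performed over SSH above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        for _, line := range lines {
            // Approximates the `grep -xq '.*\s<name>'` guard.
            if strings.HasSuffix(line, " "+hostname) || strings.HasSuffix(line, "\t"+hostname) {
                return nil
            }
        }
        for i, line := range lines {
            if strings.HasPrefix(line, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname // the sed branch
                return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
            }
        }
        lines = append(lines, "127.0.1.1 "+hostname) // the `tee -a` branch
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
        fmt.Println(ensureHostsEntry("/etc/hosts", "stopped-upgrade-180000"))
    }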
	I0806 00:56:35.143550    4539 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-965/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-965/.minikube}
	I0806 00:56:35.143560    4539 buildroot.go:174] setting up certificates
	I0806 00:56:35.143565    4539 provision.go:84] configureAuth start
	I0806 00:56:35.143571    4539 provision.go:143] copyHostCerts
	I0806 00:56:35.143630    4539 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-965/.minikube/cert.pem, removing ...
	I0806 00:56:35.143637    4539 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-965/.minikube/cert.pem
	I0806 00:56:35.143743    4539 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-965/.minikube/cert.pem (1123 bytes)
	I0806 00:56:35.143937    4539 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-965/.minikube/key.pem, removing ...
	I0806 00:56:35.143940    4539 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-965/.minikube/key.pem
	I0806 00:56:35.143986    4539 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-965/.minikube/key.pem (1675 bytes)
	I0806 00:56:35.144102    4539 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-965/.minikube/ca.pem, removing ...
	I0806 00:56:35.144105    4539 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-965/.minikube/ca.pem
	I0806 00:56:35.144163    4539 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-965/.minikube/ca.pem (1082 bytes)
	I0806 00:56:35.144263    4539 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-965/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-180000 san=[127.0.0.1 localhost minikube stopped-upgrade-180000]
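	configureAuth regenerates the machine's server certificate, signed by the local minikube CA and carrying the SANs listed in the log line (127.0.0.1, localhost, minikube, stopped-upgrade-180000). A rough sketch of that signing step with crypto/x509 follows; the key sizes, lifetimes, serial numbers, and the signServerCert helper are assumptions, not minikube's actual parameters.

    // Rough sketch of CA-signed server-cert generation with the SANs from the
    // log line. Lifetimes and key sizes are assumed values.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-180000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // san=[127.0.0.1 localhost minikube stopped-upgrade-180000]
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-180000"},
        }
        return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    }

    func main() {
        // Self-signed throwaway CA so the example is self-contained.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)
        der, err := signServerCert(ca, caKey)
        fmt.Println(len(der), err)
    }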
	I0806 00:56:31.343412    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:35.259457    4539 provision.go:177] copyRemoteCerts
	I0806 00:56:35.259496    4539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:56:35.259503    4539 sshutil.go:53] new ssh client: &{IP:localhost Port:50451 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0806 00:56:35.290923    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 00:56:35.297640    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0806 00:56:35.304198    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 00:56:35.311282    4539 provision.go:87] duration metric: took 167.712667ms to configureAuth
	I0806 00:56:35.311293    4539 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:56:35.311412    4539 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 00:56:35.311448    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:35.311536    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:35.311541    4539 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:56:35.369566    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:56:35.369577    4539 buildroot.go:70] root file system type: tmpfs
	I0806 00:56:35.369623    4539 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:56:35.369670    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:35.369790    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:35.369822    4539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:56:35.432225    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:56:35.432273    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:35.432387    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:35.432398    4539 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:56:35.771684    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:56:35.771697    4539 machine.go:97] duration metric: took 829.931458ms to provisionDockerMachine
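	The "diff: can't stat" output above is expected on a fresh VM: the provisioner stages docker.service.new, and the shell one-liner replaces the live unit and restarts docker only when the two differ (a missing unit counts as different, so the first run always installs and enables). The same update-only-if-changed guard expressed in Go, with updateUnit as an invented helper:

    // Sketch of the diff-or-replace idiom from the SSH command above.
    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func updateUnit(current, staged string) error {
        old, err := os.ReadFile(current) // a missing file counts as "different"
        if err == nil {
            newer, err2 := os.ReadFile(staged)
            if err2 != nil {
                return err2
            }
            if bytes.Equal(old, newer) {
                return nil // unit unchanged; docker keeps running
            }
        }
        if err := os.Rename(staged, current); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %s", err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(updateUnit("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new"))
    }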
	I0806 00:56:35.771704    4539 start.go:293] postStartSetup for "stopped-upgrade-180000" (driver="qemu2")
	I0806 00:56:35.771711    4539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:56:35.771762    4539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:56:35.771771    4539 sshutil.go:53] new ssh client: &{IP:localhost Port:50451 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0806 00:56:35.805514    4539 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:56:35.806775    4539 info.go:137] Remote host: Buildroot 2021.02.12
	I0806 00:56:35.806784    4539 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-965/.minikube/addons for local assets ...
	I0806 00:56:35.806876    4539 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-965/.minikube/files for local assets ...
	I0806 00:56:35.806969    4539 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem -> 14552.pem in /etc/ssl/certs
	I0806 00:56:35.807063    4539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:56:35.809569    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem --> /etc/ssl/certs/14552.pem (1708 bytes)
	I0806 00:56:35.816298    4539 start.go:296] duration metric: took 44.59ms for postStartSetup
	I0806 00:56:35.816312    4539 fix.go:56] duration metric: took 20.510780875s for fixHost
	I0806 00:56:35.816343    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:35.816447    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:35.816453    4539 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:56:35.877596    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722930995.749809796
	
	I0806 00:56:35.877606    4539 fix.go:216] guest clock: 1722930995.749809796
	I0806 00:56:35.877610    4539 fix.go:229] Guest: 2024-08-06 00:56:35.749809796 -0700 PDT Remote: 2024-08-06 00:56:35.816313 -0700 PDT m=+20.629547251 (delta=-66.503204ms)
	I0806 00:56:35.877621    4539 fix.go:200] guest clock delta is within tolerance: -66.503204ms
	I0806 00:56:35.877624    4539 start.go:83] releasing machines lock for "stopped-upgrade-180000", held for 20.572101708s
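	The guest-clock check compares the VM's `date +%s.%N` output against the host's wall clock; here the -66.5ms delta is inside tolerance, so no resync is needed. A sketch of that comparison follows; the one-second tolerance is an assumed threshold, not minikube's documented value, and the fractional part is assumed to be the usual nine-digit nanosecond field.

    // Sketch of the guest-clock delta check from fix.go's log lines above.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        nsec := int64(0)
        if len(parts) == 2 {
            // Assumes `date +%N` printed all nine nanosecond digits.
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return 0, err
            }
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }

    func main() {
        // Values taken from the log: guest 1722930995.749809796 vs remote .816313.
        delta, _ := clockDelta("1722930995.749809796", time.Unix(1722930995, 816313000))
        const tolerance = time.Second // assumed threshold
        fmt.Printf("delta=%v within=%v\n", delta, delta.Abs() < tolerance)
    }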
	I0806 00:56:35.877689    4539 ssh_runner.go:195] Run: cat /version.json
	I0806 00:56:35.877696    4539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:56:35.877697    4539 sshutil.go:53] new ssh client: &{IP:localhost Port:50451 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0806 00:56:35.877714    4539 sshutil.go:53] new ssh client: &{IP:localhost Port:50451 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	W0806 00:56:35.878265    4539 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50578->127.0.0.1:50451: write: broken pipe
	I0806 00:56:35.878282    4539 retry.go:31] will retry after 344.28547ms: ssh: handshake failed: write tcp 127.0.0.1:50578->127.0.0.1:50451: write: broken pipe
	W0806 00:56:35.909050    4539 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0806 00:56:35.909097    4539 ssh_runner.go:195] Run: systemctl --version
	I0806 00:56:35.910780    4539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 00:56:35.912428    4539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:56:35.912457    4539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0806 00:56:35.915840    4539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0806 00:56:35.920765    4539 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:56:35.920783    4539 start.go:495] detecting cgroup driver to use...
	I0806 00:56:35.920855    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:56:35.927806    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0806 00:56:35.931351    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:56:35.934365    4539 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:56:35.934392    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:56:35.937113    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:56:35.940237    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:56:35.943762    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:56:35.947051    4539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:56:35.949880    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:56:35.952804    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:56:35.956040    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:56:35.959311    4539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:56:35.962189    4539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:56:35.964829    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:56:36.044875    4539 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:56:36.051051    4539 start.go:495] detecting cgroup driver to use...
	I0806 00:56:36.051108    4539 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:56:36.057568    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:56:36.062855    4539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:56:36.069097    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:56:36.074136    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:56:36.078567    4539 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:56:36.127722    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:56:36.133151    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:56:36.138885    4539 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:56:36.140148    4539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:56:36.143085    4539 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:56:36.147985    4539 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:56:36.210905    4539 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:56:36.273120    4539 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:56:36.273181    4539 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:56:36.278240    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:56:36.350446    4539 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:56:37.502489    4539 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.152033083s)
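	To switch docker to the cgroupfs driver, minikube writes a small daemon.json (130 bytes here) and restarts the daemon. The log does not show the payload, so the JSON below is only a plausible reconstruction of its shape inferred from the "configuring docker to use cgroupfs" message:

    // Plausible daemon.json write for the cgroupfs switch; the exact 130-byte
    // payload scp'd above is not shown in the log, so this content is a guess.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        daemonJSON := []byte(`{
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": {"max-size": "100m"},
      "storage-driver": "overlay2"
    }
    `)
        err := os.WriteFile("/etc/docker/daemon.json", daemonJSON, 0644)
        fmt.Println(len(daemonJSON), err) // followed by: systemctl daemon-reload && restart docker
    }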
	I0806 00:56:37.502543    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:56:37.508381    4539 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:56:37.515808    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:56:37.520451    4539 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:56:37.582733    4539 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:56:37.646368    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:56:37.710183    4539 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:56:37.715644    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:56:37.720153    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:56:37.787631    4539 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:56:37.827003    4539 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:56:37.827080    4539 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:56:37.829221    4539 start.go:563] Will wait 60s for crictl version
	I0806 00:56:37.829270    4539 ssh_runner.go:195] Run: which crictl
	I0806 00:56:37.831020    4539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:56:37.845793    4539 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0806 00:56:37.845864    4539 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:56:37.861564    4539 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
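	The --format arguments used throughout these commands ({{.ID}}, {{.Id}}, {{.Server.Version}}, {{.CgroupDriver}}) are Go text/template expressions that the docker CLI evaluates against its API response structs. A toy rendering against a struct shaped like the version output:

    // Toy example of a docker-style --format template; the struct here only
    // imitates the shape of docker's version response.
    package main

    import (
        "os"
        "text/template"
    )

    type version struct {
        Server struct{ Version string }
    }

    func main() {
        v := version{}
        v.Server.Version = "20.10.16" // the value reported in the log
        tmpl := template.Must(template.New("v").Parse("{{.Server.Version}}\n"))
        _ = tmpl.Execute(os.Stdout, v)
    }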
	I0806 00:56:37.883814    4539 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0806 00:56:37.883892    4539 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0806 00:56:37.885210    4539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:56:37.888666    4539 kubeadm.go:883] updating cluster {Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0806 00:56:37.888716    4539 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0806 00:56:37.888756    4539 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:56:37.899597    4539 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:56:37.899606    4539 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0806 00:56:37.899657    4539 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:56:37.903218    4539 ssh_runner.go:195] Run: which lz4
	I0806 00:56:37.904463    4539 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 00:56:37.905793    4539 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:56:37.905802    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0806 00:56:38.874580    4539 docker.go:649] duration metric: took 970.153792ms to copy over tarball
	I0806 00:56:38.874645    4539 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 00:56:40.034418    4539 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.159767458s)
	I0806 00:56:40.034432    4539 ssh_runner.go:146] rm: /preloaded.tar.lz4
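	The preload path avoids pulling images over the network: a ~360MB lz4 tarball of the image store is copied to /preloaded.tar.lz4, unpacked into /var (which contains /var/lib/docker), and then deleted to reclaim space. A sketch of the extraction step; extractPreload is an invented wrapper around the exact tar flags from the log.

    // Sketch of the preload extraction: -I lz4 pipes through the lz4
    // decompressor, and --xattrs preserves security.capability so binaries
    // keep their file capabilities.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func extractPreload(tarball string) error {
        cmd := exec.Command("sudo", "tar", "--xattrs",
            "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("%v: %s", err, out)
        }
        return os.Remove(tarball) // mirrors the rm of /preloaded.tar.lz4
    }

    func main() {
        fmt.Println(extractPreload("/preloaded.tar.lz4"))
    }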
	I0806 00:56:40.050565    4539 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:56:40.053849    4539 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0806 00:56:40.058904    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:56:40.127581    4539 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:56:36.346141    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:36.346232    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:56:36.358306    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:56:36.358379    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:56:36.373320    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:56:36.373392    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:56:36.383855    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:56:36.383927    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:56:36.394919    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:56:36.394993    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:56:36.405942    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:56:36.406011    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:56:36.416787    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:56:36.416854    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:56:36.427240    4369 logs.go:276] 0 containers: []
	W0806 00:56:36.427252    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:56:36.427310    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:56:36.440805    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:56:36.440823    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:56:36.440829    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:56:36.455402    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:56:36.455416    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:56:36.473297    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:56:36.473311    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:56:36.484436    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:56:36.484449    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:56:36.498570    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:56:36.498584    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:56:36.533200    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:56:36.533215    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:56:36.545841    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:56:36.545854    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:56:36.569221    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:56:36.569230    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:56:36.605092    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:56:36.605100    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:56:36.609241    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:56:36.609250    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:56:36.623004    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:56:36.623015    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:56:36.635063    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:56:36.635079    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:56:36.646907    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:56:36.646917    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:56:36.661789    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:56:36.661800    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:56:36.673036    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:56:36.673049    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:56:36.690546    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:56:36.690557    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:56:36.706492    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:56:36.706504    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:56:39.220336    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:41.702267    4539 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.574679084s)
	I0806 00:56:41.702376    4539 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:56:41.716733    4539 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:56:41.716742    4539 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0806 00:56:41.716753    4539 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 00:56:41.721522    4539 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:56:41.723094    4539 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:56:41.724403    4539 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:56:41.724411    4539 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:56:41.725915    4539 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:56:41.726000    4539 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:56:41.727232    4539 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0806 00:56:41.727198    4539 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:56:41.728005    4539 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0806 00:56:41.729077    4539 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:56:41.729607    4539 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:56:41.730142    4539 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0806 00:56:41.730967    4539 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0806 00:56:41.730991    4539 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0806 00:56:41.731820    4539 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:56:41.732471    4539 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0806 00:56:42.046339    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:56:42.056509    4539 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0806 00:56:42.056538    4539 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:56:42.056590    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:56:42.067164    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0806 00:56:42.082225    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:56:42.092065    4539 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0806 00:56:42.092090    4539 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:56:42.092136    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:56:42.102077    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0806 00:56:42.105270    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:56:42.114920    4539 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0806 00:56:42.114940    4539 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:56:42.114991    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:56:42.124776    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0806 00:56:42.129686    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0806 00:56:42.140008    4539 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0806 00:56:42.140027    4539 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0806 00:56:42.140078    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0806 00:56:42.150117    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0806 00:56:42.150238    4539 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0806 00:56:42.152400    4539 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0806 00:56:42.152411    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0806 00:56:42.159935    4539 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0806 00:56:42.159943    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0806 00:56:42.174313    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	W0806 00:56:42.187574    4539 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0806 00:56:42.187709    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:56:42.196189    4539 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0806 00:56:42.196213    4539 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0806 00:56:42.196231    4539 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0806 00:56:42.196288    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0806 00:56:42.211399    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0806 00:56:42.212886    4539 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0806 00:56:42.212902    4539 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:56:42.212927    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:56:42.216399    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0806 00:56:42.226805    4539 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0806 00:56:42.226814    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0806 00:56:42.226822    4539 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0806 00:56:42.226865    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0806 00:56:42.226927    4539 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0806 00:56:42.236468    4539 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0806 00:56:42.236497    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0806 00:56:42.236557    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0806 00:56:42.236651    4539 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0806 00:56:42.242859    4539 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0806 00:56:42.242891    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0806 00:56:42.308071    4539 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0806 00:56:42.308087    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0806 00:56:42.399159    4539 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0806 00:56:42.405274    4539 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0806 00:56:42.405391    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:56:42.440969    4539 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0806 00:56:42.440994    4539 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:56:42.441049    4539 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:56:42.487729    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0806 00:56:42.487850    4539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0806 00:56:42.507270    4539 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0806 00:56:42.507303    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0806 00:56:42.552648    4539 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0806 00:56:42.552664    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0806 00:56:42.698367    4539 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0806 00:56:42.698389    4539 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0806 00:56:42.698397    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0806 00:56:42.931665    4539 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0806 00:56:42.931703    4539 cache_images.go:92] duration metric: took 1.214951s to LoadCachedImages
	W0806 00:56:42.931743    4539 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
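
	The image-load sequence above follows a fixed pattern: stat the tarball on the guest, scp it from the host cache if missing, then stream it into the Docker daemon. A minimal standalone sketch of the final step, assuming a local docker CLI rather than minikube's ssh_runner:

	    // Stream a saved image tarball into the Docker daemon; same effect as
	    // the logged `sudo cat <tar> | docker load`, so no second copy of the
	    // tarball is written inside the VM.
	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    )

	    func loadImageTarball(path string) error {
	    	f, err := os.Open(path) // e.g. /var/lib/minikube/images/coredns_v1.8.6
	    	if err != nil {
	    		return err
	    	}
	    	defer f.Close()

	    	cmd := exec.Command("docker", "load")
	    	cmd.Stdin = f // pipe the tarball straight into the daemon
	    	cmd.Stdout = os.Stdout
	    	cmd.Stderr = os.Stderr
	    	return cmd.Run()
	    }

	    func main() {
	    	if err := loadImageTarball(os.Args[1]); err != nil {
	    		fmt.Fprintln(os.Stderr, "load failed:", err)
	    		os.Exit(1)
	    	}
	    }
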
	I0806 00:56:42.931749    4539 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0806 00:56:42.931805    4539 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-180000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
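
	The kubelet unit text above is a systemd drop-in: the empty ExecStart= line clears any ExecStart inherited from the packaged unit before the full command line is set. A sketch of rendering such a drop-in with text/template; the field names (Binary, Flags) are illustrative, not minikube's:

	    // Render a kubelet systemd drop-in like the one logged above.
	    package main

	    import (
	    	"os"
	    	"text/template"
	    )

	    const dropIn = `[Unit]
	    Wants=docker.socket

	    [Service]
	    ExecStart=
	    ExecStart={{.Binary}} {{.Flags}}

	    [Install]
	    `

	    func main() {
	    	t := template.Must(template.New("kubelet").Parse(dropIn))
	    	t.Execute(os.Stdout, map[string]string{
	    		"Binary": "/var/lib/minikube/binaries/v1.24.1/kubelet",
	    		"Flags":  "--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15",
	    	})
	    }
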
	I0806 00:56:42.931873    4539 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:56:42.947330    4539 cni.go:84] Creating CNI manager for ""
	I0806 00:56:42.947344    4539 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:56:42.947348    4539 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:56:42.947356    4539 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-180000 NodeName:stopped-upgrade-180000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:56:42.947425    4539 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-180000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:56:42.947492    4539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0806 00:56:42.951095    4539 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:56:42.951136    4539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:56:42.953992    4539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0806 00:56:42.958503    4539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:56:42.963648    4539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0806 00:56:42.968966    4539 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0806 00:56:42.970236    4539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
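
	The one-liner above rewrites /etc/hosts in one step: filter out any stale control-plane.minikube.internal entry, append the current mapping, and copy a temp file over the original. An equivalent sketch in Go, assuming direct file access instead of the remote shell:

	    // Replace the control-plane host entry in /etc/hosts via a temp file.
	    package main

	    import (
	    	"os"
	    	"strings"
	    )

	    func main() {
	    	const hostsPath = "/etc/hosts"
	    	const entry = "10.0.2.15\tcontrol-plane.minikube.internal"

	    	data, err := os.ReadFile(hostsPath)
	    	if err != nil {
	    		panic(err)
	    	}
	    	var kept []string
	    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	    		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
	    			kept = append(kept, line) // keep everything except the stale entry
	    		}
	    	}
	    	kept = append(kept, entry)
	    	tmp := hostsPath + ".tmp"
	    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
	    		panic(err)
	    	}
	    	if err := os.Rename(tmp, hostsPath); err != nil { // one-step replacement
	    		panic(err)
	    	}
	    }
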
	I0806 00:56:42.973627    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:56:43.030692    4539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:56:43.037781    4539 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000 for IP: 10.0.2.15
	I0806 00:56:43.037793    4539 certs.go:194] generating shared ca certs ...
	I0806 00:56:43.037804    4539 certs.go:226] acquiring lock for ca certs: {Name:mkb2ca998ea1a45f9f580d4d76a58064c889c60a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:56:43.037990    4539 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-965/.minikube/ca.key
	I0806 00:56:43.038025    4539 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-965/.minikube/proxy-client-ca.key
	I0806 00:56:43.038030    4539 certs.go:256] generating profile certs ...
	I0806 00:56:43.038093    4539 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/client.key
	I0806 00:56:43.038109    4539 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156
	I0806 00:56:43.038119    4539 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0806 00:56:43.156257    4539 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156 ...
	I0806 00:56:43.156270    4539 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156: {Name:mk0f3c36402afeb1e7009d40760ebfbe8cd2bc95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:56:43.156536    4539 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156 ...
	I0806 00:56:43.156541    4539 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156: {Name:mk1b77fe9e2f52c52bcf1128eb177bf67d544f40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:56:43.156666    4539 certs.go:381] copying /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156 -> /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.crt
	I0806 00:56:43.156787    4539 certs.go:385] copying /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156 -> /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.key
	I0806 00:56:43.156911    4539 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/proxy-client.key
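
	The profile-cert step above issues a serving certificate whose IP SANs cover the cluster service IP and the node address (10.96.0.1, 127.0.0.1, 10.0.0.1, 10.0.2.15). A minimal sketch with crypto/x509; the self-signed CA here is a stand-in for the real ca.key, and error handling is elided for brevity:

	    // Issue an apiserver-style serving cert with the logged IP SANs.
	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"encoding/pem"
	    	"math/big"
	    	"net"
	    	"os"
	    	"time"
	    )

	    func main() {
	    	// Stand-in CA; in minikube this would be loaded from ca.crt / ca.key.
	    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	    	caTmpl := &x509.Certificate{
	    		SerialNumber:          big.NewInt(1),
	    		Subject:               pkix.Name{CommonName: "minikubeCA"},
	    		NotBefore:             time.Now(),
	    		NotAfter:              time.Now().AddDate(10, 0, 0),
	    		IsCA:                  true,
	    		KeyUsage:              x509.KeyUsageCertSign,
	    		BasicConstraintsValid: true,
	    	}
	    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	    	caCert, _ := x509.ParseCertificate(caDER)

	    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	    	srvTmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(2),
	    		Subject:      pkix.Name{CommonName: "minikube"},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().AddDate(3, 0, 0),
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    		IPAddresses: []net.IP{ // the SANs listed in the log line above
	    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
	    			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
	    		},
	    	}
	    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
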
	I0806 00:56:43.157038    4539 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/1455.pem (1338 bytes)
	W0806 00:56:43.157059    4539 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-965/.minikube/certs/1455_empty.pem, impossibly tiny 0 bytes
	I0806 00:56:43.157063    4539 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca-key.pem (1679 bytes)
	I0806 00:56:43.157085    4539 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem (1082 bytes)
	I0806 00:56:43.157102    4539 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:56:43.157119    4539 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/key.pem (1675 bytes)
	I0806 00:56:43.157159    4539 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem (1708 bytes)
	I0806 00:56:43.157471    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:56:43.164446    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:56:43.171454    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:56:43.178633    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:56:43.185512    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0806 00:56:43.192239    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 00:56:43.199675    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:56:43.207111    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0806 00:56:43.214109    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/certs/1455.pem --> /usr/share/ca-certificates/1455.pem (1338 bytes)
	I0806 00:56:43.220514    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1708 bytes)
	I0806 00:56:43.227669    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:56:43.235168    4539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:56:43.240485    4539 ssh_runner.go:195] Run: openssl version
	I0806 00:56:43.242508    4539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1455.pem && ln -fs /usr/share/ca-certificates/1455.pem /etc/ssl/certs/1455.pem"
	I0806 00:56:43.245442    4539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1455.pem
	I0806 00:56:43.246824    4539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:12 /usr/share/ca-certificates/1455.pem
	I0806 00:56:43.246848    4539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1455.pem
	I0806 00:56:43.248772    4539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1455.pem /etc/ssl/certs/51391683.0"
	I0806 00:56:43.252102    4539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14552.pem && ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem"
	I0806 00:56:43.255365    4539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I0806 00:56:43.256833    4539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:12 /usr/share/ca-certificates/14552.pem
	I0806 00:56:43.256855    4539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I0806 00:56:43.258786    4539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:56:43.261904    4539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:56:43.264909    4539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:56:43.266492    4539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:56:43.266516    4539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:56:43.268364    4539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
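
	Each `openssl x509 -hash` / `ln -fs` pair above builds the lookup layout OpenSSL expects: every CA PEM reachable under its subject-hash filename, <hash>.0, in /etc/ssl/certs. A sketch of the same step, shelling out to openssl for the hash:

	    // Create the <subject-hash>.0 symlink OpenSSL uses for CA lookup.
	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"path/filepath"
	    	"strings"
	    )

	    func linkBySubjectHash(pemPath, certsDir string) error {
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	    	if err != nil {
	    		return err
	    	}
	    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	    	link := filepath.Join(certsDir, hash+".0")
	    	os.Remove(link) // replace any stale link, like ln -fs
	    	return os.Symlink(pemPath, link)
	    }

	    func main() {
	    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    	}
	    }
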
	I0806 00:56:43.271642    4539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:56:43.273125    4539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 00:56:43.275086    4539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 00:56:43.276933    4539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 00:56:43.278879    4539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 00:56:43.280709    4539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 00:56:43.282517    4539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
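
	The `-checkend 86400` probes above ask whether each certificate is still valid 24 hours from now. The same check in Go, parsing the PEM directly rather than shelling out:

	    // Report whether a PEM certificate expires within the given window.
	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	    	data, err := os.ReadFile(pemPath)
	    	if err != nil {
	    		return false, err
	    	}
	    	block, _ := pem.Decode(data)
	    	if block == nil {
	    		return false, fmt.Errorf("no PEM block in %s", pemPath)
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		return false, err
	    	}
	    	return time.Now().Add(window).After(cert.NotAfter), nil
	    }

	    func main() {
	    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	    	if err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    		os.Exit(1)
	    	}
	    	fmt.Println("expires within 24h:", soon)
	    }
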
	I0806 00:56:43.284392    4539 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0806 00:56:43.284460    4539 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:56:43.295241    4539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:56:43.298303    4539 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 00:56:43.298309    4539 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 00:56:43.298334    4539 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 00:56:43.301098    4539 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:56:43.301394    4539 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-180000" does not appear in /Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:56:43.301493    4539 kubeconfig.go:62] /Users/jenkins/minikube-integration/19370-965/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-180000" cluster setting kubeconfig missing "stopped-upgrade-180000" context setting]
	I0806 00:56:43.301695    4539 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/kubeconfig: {Name:mk054609795edfdc491af119142ed9d8e6063b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:56:43.302090    4539 kapi.go:59] client config for stopped-upgrade-180000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040a7f90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:56:43.302396    4539 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 00:56:43.305035    4539 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-180000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0806 00:56:43.305041    4539 kubeadm.go:1160] stopping kube-system containers ...
	I0806 00:56:43.305078    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:56:43.315281    4539 docker.go:483] Stopping containers: [9418470fa8b3 b9546762696e 5082f389d196 29ee1941e223 974c0bca9922 729430c6b14e f2620bcfc6ae 2d13495e1513]
	I0806 00:56:43.315361    4539 ssh_runner.go:195] Run: docker stop 9418470fa8b3 b9546762696e 5082f389d196 29ee1941e223 974c0bca9922 729430c6b14e f2620bcfc6ae 2d13495e1513
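
	Stopping the kube-system containers is a two-step docker call: list IDs matching the k8s_*_(kube-system)_ name pattern, then stop them in one invocation. A standalone sketch of the pair of commands shown above:

	    // List and stop kube-system containers, as the two logged docker calls do.
	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"strings"
	    )

	    func stopKubeSystemContainers() error {
	    	list := exec.Command("docker", "ps", "-a",
	    		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}")
	    	out, err := list.Output()
	    	if err != nil {
	    		return err
	    	}
	    	ids := strings.Fields(string(out))
	    	if len(ids) == 0 {
	    		return nil // nothing to stop
	    	}
	    	stop := exec.Command("docker", append([]string{"stop"}, ids...)...)
	    	stop.Stdout = os.Stdout
	    	stop.Stderr = os.Stderr
	    	return stop.Run()
	    }

	    func main() {
	    	if err := stopKubeSystemContainers(); err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    	}
	    }
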
	I0806 00:56:43.326327    4539 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 00:56:43.331649    4539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:56:43.334899    4539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:56:43.334904    4539 kubeadm.go:157] found existing configuration files:
	
	I0806 00:56:43.334925    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf
	I0806 00:56:43.337537    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:56:43.337566    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:56:43.340278    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf
	I0806 00:56:43.343472    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:56:43.343495    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:56:43.346549    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf
	I0806 00:56:43.349190    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:56:43.349217    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:56:43.352304    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf
	I0806 00:56:43.355355    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:56:43.355377    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:56:43.358102    4539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:56:43.360755    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:56:43.382914    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:56:43.781533    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:56:43.888298    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:56:43.913319    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
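
	The restart path above replays individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config instead of running a full init. A sketch of that sequencing; the phase list and paths mirror the log, the loop itself is illustrative:

	    // Run selected kubeadm init phases in order, stopping at the first failure.
	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    )

	    func main() {
	    	phases := [][]string{
	    		{"certs", "all"},
	    		{"kubeconfig", "all"},
	    		{"kubelet-start"},
	    		{"control-plane", "all"},
	    		{"etcd", "local"},
	    	}
	    	for _, p := range phases {
	    		args := append([]string{"init", "phase"}, p...)
	    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	    		cmd := exec.Command("/var/lib/minikube/binaries/v1.24.1/kubeadm", args...)
	    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	    		if err := cmd.Run(); err != nil {
	    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
	    			os.Exit(1)
	    		}
	    	}
	    }
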
	I0806 00:56:43.934157    4539 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:56:43.934236    4539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:56:44.435242    4539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:56:44.936288    4539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:56:44.941460    4539 api_server.go:72] duration metric: took 1.007312208s to wait for apiserver process to appear ...
	I0806 00:56:44.941467    4539 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:56:44.941475    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
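
	From here both processes poll GET https://10.0.2.15:8443/healthz on a fixed interval until the API server answers, which in this run it never does. A sketch of such a polling loop, with TLS verification skipped purely for illustration:

	    // Poll the apiserver healthz endpoint until it returns "ok" or a deadline passes.
	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func waitForHealthz(url string, interval, deadline time.Duration) error {
	    	client := &http.Client{
	    		Timeout: 5 * time.Second, // matches the per-request timeouts in the log
	    		Transport: &http.Transport{
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
	    		},
	    	}
	    	stop := time.Now().Add(deadline)
	    	for time.Now().Before(stop) {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
	    				return nil
	    			}
	    		}
	    		time.Sleep(interval)
	    	}
	    	return fmt.Errorf("apiserver at %s never became healthy", url)
	    }

	    func main() {
	    	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
	    		fmt.Println(err)
	    	}
	    }
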
	I0806 00:56:44.221832    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:44.221931    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:56:44.234559    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:56:44.234632    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:56:44.245889    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:56:44.245962    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:56:44.256404    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:56:44.256474    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:56:44.267433    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:56:44.267507    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:56:44.278481    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:56:44.278551    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:56:44.289512    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:56:44.289578    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:56:44.300990    4369 logs.go:276] 0 containers: []
	W0806 00:56:44.301002    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:56:44.301059    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:56:44.311410    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:56:44.311430    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:56:44.311436    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:56:44.325158    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:56:44.325170    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:56:44.337753    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:56:44.337763    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:56:44.357114    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:56:44.357129    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:56:44.368441    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:56:44.368451    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:56:44.385435    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:56:44.385446    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:56:44.397032    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:56:44.397044    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:56:44.431244    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:56:44.431261    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:56:44.447009    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:56:44.447020    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:56:44.485683    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:56:44.485700    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:56:44.500812    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:56:44.500824    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:56:44.513884    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:56:44.513898    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:56:44.540374    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:56:44.540396    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:56:44.545436    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:56:44.545447    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:56:44.557532    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:56:44.557544    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:56:44.575447    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:56:44.575462    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:56:44.587732    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:56:44.587746    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
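
	Each "Gathering logs" pass above tails the last 400 lines of every discovered container, plus kubelet, Docker, dmesg, and node state. A sketch of the per-container step; the container IDs are taken from the log:

	    // Capture the last N lines of a container's logs, like `docker logs --tail 400 <id>`.
	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    func tailContainerLogs(id string, lines int) (string, error) {
	    	out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(lines), id).CombinedOutput()
	    	return string(out), err
	    }

	    func main() {
	    	for _, id := range []string{"b1e6d57cf5ab", "f750ebd6989d"} { // IDs from the log
	    		logs, err := tailContainerLogs(id, 400)
	    		if err != nil {
	    			fmt.Println(id, "error:", err)
	    			continue
	    		}
	    		fmt.Println(logs)
	    	}
	    }
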
	I0806 00:56:49.943600    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:49.943638    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:47.103043    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:54.944040    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:54.944117    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:52.105478    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:52.105843    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:56:52.134841    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:56:52.134968    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:56:52.154843    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:56:52.154942    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:56:52.168580    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:56:52.168655    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:56:52.180222    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:56:52.180293    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:56:52.192946    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:56:52.193011    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:56:52.204395    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:56:52.204467    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:56:52.214572    4369 logs.go:276] 0 containers: []
	W0806 00:56:52.214584    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:56:52.214647    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:56:52.225209    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:56:52.225227    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:56:52.225232    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:56:52.236862    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:56:52.236874    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:56:52.248580    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:56:52.248595    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:56:52.253117    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:56:52.253124    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:56:52.265358    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:56:52.265371    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:56:52.282469    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:56:52.282481    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:56:52.296547    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:56:52.296557    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:56:52.319815    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:56:52.319823    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:56:52.334003    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:56:52.334013    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:56:52.347946    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:56:52.347959    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:56:52.373594    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:56:52.373607    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:56:52.411164    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:56:52.411174    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:56:52.445627    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:56:52.445639    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:56:52.456988    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:56:52.456999    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:56:52.467958    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:56:52.467970    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:56:52.479847    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:56:52.479859    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:56:52.491553    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:56:52.491565    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:56:55.007166    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:59.944993    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:59.945066    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:00.009731    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:00.010062    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:57:00.058446    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:57:00.058567    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:57:00.077781    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:57:00.077873    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:57:00.091902    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:57:00.091965    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:57:00.103773    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:57:00.103843    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:57:00.114687    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:57:00.114755    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:57:00.125479    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:57:00.125549    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:57:00.136538    4369 logs.go:276] 0 containers: []
	W0806 00:57:00.136549    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:57:00.136617    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:57:00.147261    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:57:00.147278    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:57:00.147283    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:57:00.152087    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:57:00.152094    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:57:00.188067    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:57:00.188083    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:57:00.202416    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:57:00.202429    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:57:00.213562    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:57:00.213573    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:57:00.225032    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:57:00.225041    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:57:00.239144    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:57:00.239155    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:57:00.251549    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:57:00.251559    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:57:00.263678    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:57:00.263691    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:57:00.301983    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:57:00.301995    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:57:00.319597    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:57:00.319612    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:57:00.331811    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:57:00.331827    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:57:00.343438    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:57:00.343453    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:57:00.355922    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:57:00.355931    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:57:00.367867    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:57:00.367876    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:57:00.378745    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:57:00.378757    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:57:00.396589    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:57:00.396603    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:57:04.946139    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:04.946231    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:02.922762    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:09.947759    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:09.947839    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:07.925220    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:07.925668    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:57:07.962389    4369 logs.go:276] 2 containers: [b1e6d57cf5ab 9b1a1d475261]
	I0806 00:57:07.962522    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:57:07.984167    4369 logs.go:276] 2 containers: [f750ebd6989d 5f751153bd2e]
	I0806 00:57:07.984264    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:57:07.999088    4369 logs.go:276] 1 containers: [b301c8dea344]
	I0806 00:57:07.999167    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:57:08.024829    4369 logs.go:276] 2 containers: [3056cf48d519 a30aa9e17223]
	I0806 00:57:08.024894    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:57:08.035542    4369 logs.go:276] 1 containers: [41cb73ec722a]
	I0806 00:57:08.035607    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:57:08.048869    4369 logs.go:276] 2 containers: [25fb4eb7829b de9b53846284]
	I0806 00:57:08.048948    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:57:08.059090    4369 logs.go:276] 0 containers: []
	W0806 00:57:08.059109    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:57:08.059191    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:57:08.070079    4369 logs.go:276] 2 containers: [971a619264fc 76efea041512]
	I0806 00:57:08.070094    4369 logs.go:123] Gathering logs for kube-scheduler [3056cf48d519] ...
	I0806 00:57:08.070099    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3056cf48d519"
	I0806 00:57:08.082360    4369 logs.go:123] Gathering logs for storage-provisioner [76efea041512] ...
	I0806 00:57:08.082373    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76efea041512"
	I0806 00:57:08.094351    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:57:08.094361    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:57:08.100772    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:57:08.100788    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:57:08.142069    4369 logs.go:123] Gathering logs for coredns [b301c8dea344] ...
	I0806 00:57:08.142081    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b301c8dea344"
	I0806 00:57:08.153705    4369 logs.go:123] Gathering logs for storage-provisioner [971a619264fc] ...
	I0806 00:57:08.153716    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 971a619264fc"
	I0806 00:57:08.165551    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:57:08.165563    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:57:08.188266    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:57:08.188275    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:57:08.199950    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:57:08.199961    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:57:08.238109    4369 logs.go:123] Gathering logs for kube-scheduler [a30aa9e17223] ...
	I0806 00:57:08.238122    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a30aa9e17223"
	I0806 00:57:08.251440    4369 logs.go:123] Gathering logs for kube-controller-manager [25fb4eb7829b] ...
	I0806 00:57:08.251460    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25fb4eb7829b"
	I0806 00:57:08.269129    4369 logs.go:123] Gathering logs for kube-controller-manager [de9b53846284] ...
	I0806 00:57:08.269139    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de9b53846284"
	I0806 00:57:08.280734    4369 logs.go:123] Gathering logs for etcd [f750ebd6989d] ...
	I0806 00:57:08.280748    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f750ebd6989d"
	I0806 00:57:08.294961    4369 logs.go:123] Gathering logs for etcd [5f751153bd2e] ...
	I0806 00:57:08.294973    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5f751153bd2e"
	I0806 00:57:08.313262    4369 logs.go:123] Gathering logs for kube-proxy [41cb73ec722a] ...
	I0806 00:57:08.313274    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41cb73ec722a"
	I0806 00:57:08.326408    4369 logs.go:123] Gathering logs for kube-apiserver [b1e6d57cf5ab] ...
	I0806 00:57:08.326421    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1e6d57cf5ab"
	I0806 00:57:08.340249    4369 logs.go:123] Gathering logs for kube-apiserver [9b1a1d475261] ...
	I0806 00:57:08.340261    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b1a1d475261"
	I0806 00:57:10.852380    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:14.949649    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:14.949718    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:15.854846    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:15.855078    4369 kubeadm.go:597] duration metric: took 4m4.351931917s to restartPrimaryControlPlane
	W0806 00:57:15.855225    4369 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 00:57:15.855282    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0806 00:57:16.867732    4369 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.012438792s)
	I0806 00:57:16.867794    4369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:57:16.872791    4369 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:57:16.875566    4369 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:57:16.878893    4369 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:57:16.878900    4369 kubeadm.go:157] found existing configuration files:
	
	I0806 00:57:16.878921    4369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/admin.conf
	I0806 00:57:16.881618    4369 kubeadm.go:163] "https://control-plane.minikube.internal:50262" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:57:16.881646    4369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:57:16.884100    4369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/kubelet.conf
	I0806 00:57:16.886982    4369 kubeadm.go:163] "https://control-plane.minikube.internal:50262" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:57:16.887005    4369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:57:16.890184    4369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/controller-manager.conf
	I0806 00:57:16.892920    4369 kubeadm.go:163] "https://control-plane.minikube.internal:50262" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:57:16.892942    4369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:57:16.895603    4369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/scheduler.conf
	I0806 00:57:16.898628    4369 kubeadm.go:163] "https://control-plane.minikube.internal:50262" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50262 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:57:16.898647    4369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
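
Editor's note: the block above is minikube's per-file kubeconfig check. After the combined ls check fails, it greps each file under /etc/kubernetes for the expected control-plane URL and removes the file when the grep fails (here with status 2, because kubeadm reset already deleted them), so that kubeadm init can regenerate everything. A minimal Go sketch of that loop, assuming a hypothetical runSSH helper in place of minikube's ssh_runner:

    // Sketch of the stale-kubeconfig cleanup logged above. runSSH is a
    // hypothetical stand-in for minikube's ssh_runner; the real logic
    // lives in minikube's kubeadm bootstrapper.
    package main

    import "fmt"

    func runSSH(cmd string) error {
        fmt.Println("Run:", cmd)           // would execute over SSH on the node
        return fmt.Errorf("exit status 2") // simulate grep on a missing file
    }

    func main() {
        apiURL := "https://control-plane.minikube.internal:50262"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            // grep exits 1 when the URL is absent and 2 when the file is
            // missing (the case in the log); either way the file cannot be
            // trusted, so it is removed before re-running kubeadm init.
            if err := runSSH(fmt.Sprintf("sudo grep %s %s", apiURL, f)); err != nil {
                runSSH("sudo rm -f " + f)
            }
        }
    }
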
	I0806 00:57:16.901544    4369 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:57:16.923115    4369 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0806 00:57:16.923211    4369 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 00:57:16.978219    4369 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:57:16.978271    4369 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:57:16.978329    4369 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:57:17.027251    4369 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:57:17.035402    4369 out.go:204]   - Generating certificates and keys ...
	I0806 00:57:17.035434    4369 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 00:57:17.035470    4369 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 00:57:17.035513    4369 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 00:57:17.035555    4369 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 00:57:17.035595    4369 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 00:57:17.035625    4369 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 00:57:17.035663    4369 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 00:57:17.035698    4369 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 00:57:17.035736    4369 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 00:57:17.035772    4369 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 00:57:17.035793    4369 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 00:57:17.035830    4369 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:57:17.125794    4369 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:57:17.355303    4369 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:57:17.468028    4369 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:57:17.715656    4369 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:57:17.744456    4369 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:57:17.744824    4369 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:57:17.744865    4369 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 00:57:17.830953    4369 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:57:19.952216    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:19.952246    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:17.836396    4369 out.go:204]   - Booting up control plane ...
	I0806 00:57:17.836447    4369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:57:17.836490    4369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:57:17.836525    4369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:57:17.836588    4369 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:57:17.836670    4369 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 00:57:22.331553    4369 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503143 seconds
	I0806 00:57:22.331701    4369 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:57:22.335322    4369 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:57:22.856119    4369 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:57:22.856570    4369 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-217000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:57:23.362305    4369 kubeadm.go:310] [bootstrap-token] Using token: x5n1wz.pdapcdyzofrirx45
	I0806 00:57:23.366542    4369 out.go:204]   - Configuring RBAC rules ...
	I0806 00:57:23.366631    4369 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:57:23.369009    4369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:57:23.371469    4369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:57:23.372683    4369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:57:23.373837    4369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:57:23.375003    4369 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:57:23.378701    4369 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:57:23.562202    4369 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 00:57:23.771848    4369 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 00:57:23.772383    4369 kubeadm.go:310] 
	I0806 00:57:23.772423    4369 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 00:57:23.772426    4369 kubeadm.go:310] 
	I0806 00:57:23.772467    4369 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 00:57:23.772470    4369 kubeadm.go:310] 
	I0806 00:57:23.772482    4369 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 00:57:23.772513    4369 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:57:23.772539    4369 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:57:23.772542    4369 kubeadm.go:310] 
	I0806 00:57:23.772570    4369 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 00:57:23.772573    4369 kubeadm.go:310] 
	I0806 00:57:23.772600    4369 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:57:23.772609    4369 kubeadm.go:310] 
	I0806 00:57:23.772631    4369 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 00:57:23.772669    4369 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:57:23.772710    4369 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:57:23.772713    4369 kubeadm.go:310] 
	I0806 00:57:23.772751    4369 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:57:23.772800    4369 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 00:57:23.772825    4369 kubeadm.go:310] 
	I0806 00:57:23.772866    4369 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x5n1wz.pdapcdyzofrirx45 \
	I0806 00:57:23.772945    4369 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:004497139f3dc048a20953509ef68dec08d54d5db6f0d1b10a415219fecf194f \
	I0806 00:57:23.772962    4369 kubeadm.go:310] 	--control-plane 
	I0806 00:57:23.772966    4369 kubeadm.go:310] 
	I0806 00:57:23.773014    4369 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:57:23.773020    4369 kubeadm.go:310] 
	I0806 00:57:23.773062    4369 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x5n1wz.pdapcdyzofrirx45 \
	I0806 00:57:23.773114    4369 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:004497139f3dc048a20953509ef68dec08d54d5db6f0d1b10a415219fecf194f 
	I0806 00:57:23.773169    4369 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:57:23.773182    4369 cni.go:84] Creating CNI manager for ""
	I0806 00:57:23.773196    4369 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:57:23.777062    4369 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 00:57:23.783999    4369 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 00:57:23.787195    4369 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
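
Editor's note: with the qemu2 driver, a docker runtime, and Kubernetes v1.24+, minikube selects the built-in bridge CNI and writes a small conflist to /etc/cni/net.d, as logged above. The exact 496-byte payload is not shown in the log; the following is only an assumption about the typical shape of such a bridge conflist, embedded as a Go string so it stays self-contained:

    // Representative bridge CNI conflist of the kind scp'd above; this is
    // an assumption about its shape, not minikube's literal template.
    package main

    import "fmt"

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // The real file lands at /etc/cni/net.d/1-k8s.conflist on the node.
        fmt.Printf("%d bytes\n", len(bridgeConflist))
    }
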
	I0806 00:57:23.792808    4369 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 00:57:23.792862    4369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:57:23.792886    4369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-217000 minikube.k8s.io/updated_at=2024_08_06T00_57_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=running-upgrade-217000 minikube.k8s.io/primary=true
	I0806 00:57:23.832217    4369 ops.go:34] apiserver oom_adj: -16
	I0806 00:57:23.832217    4369 kubeadm.go:1113] duration metric: took 39.391333ms to wait for elevateKubeSystemPrivileges
	I0806 00:57:23.832230    4369 kubeadm.go:394] duration metric: took 4m12.383212167s to StartCluster
	I0806 00:57:23.832241    4369 settings.go:142] acquiring lock: {Name:mk345cecdfb5b849013811e238a7c51cfd047298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:57:23.832323    4369 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:57:23.832719    4369 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/kubeconfig: {Name:mk054609795edfdc491af119142ed9d8e6063b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:57:23.832919    4369 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:57:23.832924    4369 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 00:57:23.832962    4369 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-217000"
	I0806 00:57:23.833006    4369 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-217000"
	W0806 00:57:23.833013    4369 addons.go:243] addon storage-provisioner should already be in state true
	I0806 00:57:23.833024    4369 host.go:66] Checking if "running-upgrade-217000" exists ...
	I0806 00:57:23.833005    4369 config.go:182] Loaded profile config "running-upgrade-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 00:57:23.833014    4369 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-217000"
	I0806 00:57:23.833048    4369 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-217000"
	I0806 00:57:23.833875    4369 kapi.go:59] client config for running-upgrade-217000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/running-upgrade-217000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101eabf90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:57:23.833986    4369 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-217000"
	W0806 00:57:23.833991    4369 addons.go:243] addon default-storageclass should already be in state true
	I0806 00:57:23.833996    4369 host.go:66] Checking if "running-upgrade-217000" exists ...
	I0806 00:57:23.837059    4369 out.go:177] * Verifying Kubernetes components...
	I0806 00:57:23.837425    4369 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 00:57:23.841164    4369 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 00:57:23.841172    4369 sshutil.go:53] new ssh client: &{IP:localhost Port:50230 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/running-upgrade-217000/id_rsa Username:docker}
	I0806 00:57:23.845001    4369 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:57:24.954471    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:24.954550    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:23.849007    4369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:57:23.853077    4369 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:57:23.853084    4369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 00:57:23.853090    4369 sshutil.go:53] new ssh client: &{IP:localhost Port:50230 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/running-upgrade-217000/id_rsa Username:docker}
	I0806 00:57:23.935896    4369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:57:23.941056    4369 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:57:23.941095    4369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:57:23.944843    4369 api_server.go:72] duration metric: took 111.912292ms to wait for apiserver process to appear ...
	I0806 00:57:23.944853    4369 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:57:23.944859    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:24.003585    4369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 00:57:24.027568    4369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:57:29.954891    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:29.954929    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:28.946934    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:28.946984    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:34.957196    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:34.957234    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:33.947338    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:33.947365    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:39.959399    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:39.959446    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:38.947647    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:38.947690    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
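
Editor's note: the paired api_server.go:253/269 lines that dominate the rest of this log are one poll loop per test process: an HTTPS GET against /healthz with a short per-request timeout, where a timeout is logged as "stopped" and the check retries until an overall deadline. A minimal Go sketch of such a poller; the 2s request timeout is an assumption consistent with the message text, the 5s retry interval matches the log spacing, and the 6m deadline mirrors the "Will wait 6m0s for node" line above:

    // Minimal healthz poller of the kind producing the paired
    // "Checking ... / stopped: ..." lines above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        url := "https://10.0.2.15:8443/healthz"
        client := &http.Client{
            Timeout: 2 * time.Second, // per-request timeout; an assumption
            Transport: &http.Transport{
                // The real checker trusts the cluster CA; skipping
                // verification just keeps the sketch self-contained.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            fmt.Printf("Checking apiserver healthz at %s ...\n", url)
            resp, err := client.Get(url)
            if err != nil {
                // Logged above as: stopped: ...: context deadline exceeded
                // (Client.Timeout exceeded while awaiting headers)
                fmt.Printf("stopped: %s: %v\n", url, err)
            } else {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return // apiserver is healthy
                }
            }
            time.Sleep(5 * time.Second) // matches the ~5s spacing in the log
        }
    }
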
	I0806 00:57:44.960870    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:44.961200    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:57:44.995097    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:57:44.995236    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:57:45.015447    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:57:45.015550    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:57:45.029538    4539 logs.go:276] 0 containers: []
	W0806 00:57:45.029552    4539 logs.go:278] No container was found matching "coredns"
	I0806 00:57:45.029612    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:57:45.041391    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:57:45.041457    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:57:45.052037    4539 logs.go:276] 0 containers: []
	W0806 00:57:45.052046    4539 logs.go:278] No container was found matching "kube-proxy"
	I0806 00:57:45.052102    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:57:45.068884    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:57:45.068950    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:57:45.081002    4539 logs.go:276] 0 containers: []
	W0806 00:57:45.081013    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:57:45.081063    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:57:45.091772    4539 logs.go:276] 0 containers: []
	W0806 00:57:45.091787    4539 logs.go:278] No container was found matching "storage-provisioner"
	I0806 00:57:45.091793    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:57:45.091799    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:57:45.105980    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:57:45.105993    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:57:45.121934    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:57:45.121947    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:57:45.136863    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:57:45.136876    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:57:45.159704    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:57:45.159714    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:57:45.174825    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:57:45.174837    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:57:45.192207    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:57:45.192218    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:57:45.196693    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:57:45.196698    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:57:43.948109    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:43.948132    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:45.307988    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:57:45.308006    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:57:45.319259    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:57:45.319271    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:57:45.337068    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:57:45.337080    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:57:45.359982    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:57:45.359993    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:57:45.388239    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:57:45.388251    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
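
Editor's note: each time the healthz wait times out, minikube falls back to the diagnostics pass shown above: it lists containers per control-plane component with a docker ps name filter, then tails each container's logs, alongside dmesg, journalctl, and kubectl describe nodes. A compact Go sketch of that gather loop, again assuming a hypothetical runSSHOutput helper in place of ssh_runner:

    // Sketch of the diagnostics pass above: find container IDs per
    // component, then tail each one's last 400 log lines.
    package main

    import (
        "fmt"
        "strings"
    )

    // runSSHOutput is a hypothetical helper returning a command's stdout
    // from the node.
    func runSSHOutput(cmd string) string {
        fmt.Println("Run:", cmd)
        return ""
    }

    func main() {
        for _, c := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        } {
            ids := strings.Fields(runSSHOutput(fmt.Sprintf(
                "docker ps -a --filter=name=k8s_%s --format={{.ID}}", c)))
            if len(ids) == 0 {
                // Matches the W-level "No container was found matching"
                // lines in the log.
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                runSSHOutput("docker logs --tail 400 " + id)
            }
        }
    }
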
	I0806 00:57:47.904729    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:48.948710    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:48.948743    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:53.949467    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:53.949521    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0806 00:57:54.344308    4369 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0806 00:57:54.350342    4369 out.go:177] * Enabled addons: storage-provisioner
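
Editor's note: the 'default-storageclass' failure above is a live API call, not a manifest copy: making "standard" the default requires listing StorageClasses, and the GET to 10.0.2.15:8443 hits an i/o timeout because the apiserver never became reachable. A sketch of the failing call with client-go; the kubeconfig path is an assumption for illustration:

    // The StorageClass listing that times out in the addon error above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Fails with `Get ".../apis/storage.k8s.io/v1/storageclasses":
        // dial tcp 10.0.2.15:8443: i/o timeout` when the apiserver is
        // unreachable, exactly as reported by the addon above.
        _, err = cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        fmt.Println(err)
    }
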
	I0806 00:57:52.907070    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:52.907360    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:57:52.936512    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:57:52.936635    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:57:52.953790    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:57:52.953862    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:57:52.966423    4539 logs.go:276] 0 containers: []
	W0806 00:57:52.966442    4539 logs.go:278] No container was found matching "coredns"
	I0806 00:57:52.966515    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:57:52.977947    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:57:52.978011    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:57:52.987797    4539 logs.go:276] 0 containers: []
	W0806 00:57:52.987806    4539 logs.go:278] No container was found matching "kube-proxy"
	I0806 00:57:52.987852    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:57:52.998296    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:57:52.998354    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:57:53.031169    4539 logs.go:276] 0 containers: []
	W0806 00:57:53.031184    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:57:53.031236    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:57:53.044597    4539 logs.go:276] 0 containers: []
	W0806 00:57:53.044607    4539 logs.go:278] No container was found matching "storage-provisioner"
	I0806 00:57:53.044612    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:57:53.044618    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:57:53.056342    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:57:53.056351    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:57:53.061050    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:57:53.061058    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:57:53.075936    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:57:53.075951    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:57:53.091226    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:57:53.091236    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:57:53.108307    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:57:53.108319    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:57:53.128686    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:57:53.128697    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:57:53.152645    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:57:53.152656    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:57:53.175831    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:57:53.175841    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:57:53.203452    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:57:53.203463    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:57:53.246475    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:57:53.246485    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:57:53.259668    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:57:53.259682    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:57:53.273383    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:57:53.273396    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:57:54.360289    4369 addons.go:510] duration metric: took 30.527561333s for enable addons: enabled=[storage-provisioner]
	I0806 00:57:55.789592    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:58.950446    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:58.950499    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:00.791812    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:00.791990    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:00.812240    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:00.812340    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:00.827719    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:00.827792    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:00.840625    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:00.840701    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:00.851294    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:00.851357    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:00.861844    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:00.861927    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:00.873020    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:00.873090    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:00.883171    4539 logs.go:276] 0 containers: []
	W0806 00:58:00.883182    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:00.883234    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:00.893344    4539 logs.go:276] 1 containers: [cc8735fa11c6]
	I0806 00:58:00.893363    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:00.893369    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:00.917770    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:00.917781    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:00.921590    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:00.921597    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:00.962130    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:00.962141    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:00.975898    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:00.975908    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:00.988085    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:00.988096    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:00.999390    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:00.999402    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:01.011225    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:01.011236    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:01.039714    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:01.039722    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:01.054777    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:01.054788    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:01.078281    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:01.078306    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:01.095244    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:01.095254    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:01.109694    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:01.109706    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:01.122655    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:01.122669    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:01.134231    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:01.134243    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:01.149124    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:01.149141    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:03.669299    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:03.951739    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:03.951762    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:08.671532    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:08.671650    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:08.685491    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:08.685565    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:08.696224    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:08.696285    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:08.707647    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:08.707706    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:08.718443    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:08.718515    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:08.729264    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:08.729335    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:08.739889    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:08.739956    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:08.750856    4539 logs.go:276] 0 containers: []
	W0806 00:58:08.750868    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:08.750922    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:08.761472    4539 logs.go:276] 1 containers: [cc8735fa11c6]
	I0806 00:58:08.761490    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:08.761495    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:08.778293    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:08.778303    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:08.783030    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:08.783038    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:08.793758    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:08.793770    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:08.816932    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:08.816949    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:08.833831    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:08.833843    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:08.845102    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:08.845112    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:08.869693    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:08.869704    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:08.897216    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:08.897230    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:08.912846    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:08.912857    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:08.927035    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:08.927045    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:08.938568    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:08.938578    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:08.951359    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:08.951370    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:08.965626    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:08.965638    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:08.984806    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:08.984818    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:08.997019    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:08.997031    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:08.953208    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:08.953223    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:11.536308    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:13.955073    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:13.955120    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:16.538602    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:16.539037    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:16.580402    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:16.580552    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:16.602446    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:16.602536    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:16.617602    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:16.617684    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:16.629938    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:16.630020    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:16.640706    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:16.640771    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:16.651516    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:16.651587    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:16.661721    4539 logs.go:276] 0 containers: []
	W0806 00:58:16.661732    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:16.661788    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:16.672503    4539 logs.go:276] 1 containers: [cc8735fa11c6]
	I0806 00:58:16.672523    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:16.672529    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:16.687114    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:16.687127    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:16.702103    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:16.702118    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:16.722057    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:16.722066    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:16.739873    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:16.739882    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:16.768377    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:16.768385    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:16.791109    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:16.791124    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:16.802621    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:16.802633    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:16.806890    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:16.806899    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:16.841838    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:16.841854    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:16.856010    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:16.856018    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:16.867199    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:16.867211    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:16.878197    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:16.878209    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:16.902101    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:16.902107    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:16.915037    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:16.915048    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:16.926124    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:16.926134    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:19.443348    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:18.956072    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:18.956109    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:24.443782    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:24.443921    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:24.461571    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:24.461648    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:24.472125    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:24.472191    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:24.482487    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:24.482580    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:24.492757    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:24.492827    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:24.503057    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:24.503134    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:24.513912    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:24.513981    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:24.529696    4539 logs.go:276] 0 containers: []
	W0806 00:58:24.529710    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:24.529770    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:24.540091    4539 logs.go:276] 1 containers: [cc8735fa11c6]
	I0806 00:58:24.540112    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:24.540117    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:24.569051    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:24.569064    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:24.602347    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:24.602367    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:24.623290    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:24.623300    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:24.680410    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:24.680424    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:24.692134    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:24.692144    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:24.722248    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:24.722262    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:24.735906    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:24.735916    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:24.751204    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:24.751217    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:24.762854    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:24.762865    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:24.776392    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:24.776401    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:24.800427    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:24.800437    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:24.821643    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:24.821657    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:24.834341    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:24.834355    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:24.851898    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:24.851909    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:24.868446    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:24.868455    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:23.958336    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:23.958463    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:23.979496    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:58:23.979571    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:23.994942    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:58:23.995025    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:24.006702    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:58:24.006777    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:24.017277    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:58:24.017342    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:24.027897    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:58:24.027963    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:24.038868    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:58:24.038930    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:24.049334    4369 logs.go:276] 0 containers: []
	W0806 00:58:24.049345    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:24.049404    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:24.060302    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:58:24.060315    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:58:24.060322    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:58:24.075142    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:58:24.075153    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:58:24.087098    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:58:24.087109    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:58:24.101508    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:58:24.101520    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:58:24.113346    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:58:24.113359    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:24.125063    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:24.125077    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:24.162800    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:58:24.162811    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:58:24.177250    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:58:24.177259    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:58:24.189586    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:58:24.189597    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:58:24.207343    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:58:24.207354    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:58:24.219218    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:24.219229    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:24.243223    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:24.243232    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:24.277144    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:24.277156    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
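	[Note] The block above is one complete diagnostic cycle: the healthz probe against https://10.0.2.15:8443/healthz times out ("stopped: ... context deadline exceeded"), minikube enumerates the control-plane containers, tails their logs, and then retries. The report interleaves two processes (PIDs 4369 and 4539), so timestamps can step backwards between adjacent lines. Below is a minimal Go sketch of the probe-and-retry pattern only, assuming a plain net/http client, the ~5 s per-request timeout implied by each "Checking"/"stopped" pair, and an arbitrary overall budget; it is not minikube's actual api_server.go code.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// ~5 s matches the gap between each "Checking" and "stopped" line above
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the apiserver inside the guest serves a self-signed certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute) // overall budget is an assumption
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// mirrors: stopped: ... context deadline exceeded
			fmt.Printf("stopped: %v\n", err)
			time.Sleep(3 * time.Second) // the real loop gathers logs here, then retries
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthz OK")
			return
		}
	}
	fmt.Println("gave up waiting for apiserver")
}
```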
	I0806 00:58:27.375087    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:26.782476    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:32.377456    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:32.377590    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:32.391962    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:32.392047    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:32.404346    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:32.404421    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:32.416607    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:32.416679    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:32.427625    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:32.427703    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:32.438565    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:32.438631    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:32.449328    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:32.449390    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:32.460108    4539 logs.go:276] 0 containers: []
	W0806 00:58:32.460123    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:32.460182    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:32.470796    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:58:32.470814    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:32.470819    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:32.499390    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:32.499404    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:32.516847    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:32.516856    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:32.534285    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:32.534295    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:32.538396    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:32.538402    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:32.572709    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:32.572720    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:32.590057    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:32.590068    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:32.604457    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:32.604467    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:32.616902    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:32.616914    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:32.641322    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:58:32.641337    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:58:32.653393    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:32.653406    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:32.676874    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:32.676883    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:32.691331    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:32.691342    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:32.703971    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:32.703982    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:32.721686    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:32.721701    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:32.735877    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:32.735890    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:32.747235    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:32.747245    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:31.784843    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:31.785011    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:31.796470    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:58:31.796545    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:31.807135    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:58:31.807208    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:31.817817    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:58:31.817881    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:31.828181    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:58:31.828243    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:31.838684    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:58:31.838755    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:31.849513    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:58:31.849584    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:31.859756    4369 logs.go:276] 0 containers: []
	W0806 00:58:31.859772    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:31.859822    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:31.870716    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:58:31.870732    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:58:31.870738    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:58:31.888615    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:31.888627    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:31.911628    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:58:31.911639    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:31.922929    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:31.922943    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:31.927790    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:31.927799    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:31.962768    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:58:31.962779    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:58:31.977986    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:58:31.977996    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:58:31.989685    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:58:31.989699    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:58:32.005529    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:58:32.005539    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:58:32.018680    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:32.018691    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:32.052513    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:58:32.052526    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:58:32.067085    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:58:32.067095    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:58:32.078955    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:58:32.078965    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:58:34.592631    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:35.260523    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:39.594107    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:39.594291    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:39.609698    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:58:39.609778    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:39.621578    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:58:39.621650    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:39.634077    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:58:39.634138    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:39.646423    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:58:39.646498    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:39.660774    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:58:39.660845    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:39.671353    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:58:39.671422    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:39.685403    4369 logs.go:276] 0 containers: []
	W0806 00:58:39.685414    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:39.685470    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:39.696069    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:58:39.696086    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:58:39.696091    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:39.707996    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:39.708008    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:39.742658    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:58:39.742671    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:58:39.756738    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:58:39.756750    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:58:39.768272    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:58:39.768283    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:58:39.780263    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:58:39.780276    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:58:39.795038    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:58:39.795049    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:58:39.812833    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:58:39.812842    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:58:39.824136    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:39.824147    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:39.849424    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:39.849434    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:39.882311    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:39.882320    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:39.886637    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:58:39.886645    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:58:39.902392    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:58:39.902405    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
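	[Note] Each cycle starts by discovering container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, one query per control-plane component. Because `-a` also lists exited containers, a component that has been restarted reports two IDs (as with the two kube-apiserver containers in the 4539 process), and an empty result produces the "No container was found matching" warning. A sketch of that discovery step, assuming local `docker` execution rather than minikube's SSH runner:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all (running or exited) containers whose name matches
// the kubelet naming convention k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line; empty if none
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("E listing %s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors logs.go:276
		if len(ids) == 0 {
			fmt.Printf("W no container was found matching %q\n", c) // mirrors logs.go:278
		}
	}
}
```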
	I0806 00:58:40.263129    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:40.263263    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:40.278503    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:40.278586    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:40.290724    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:40.290787    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:40.301267    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:40.301328    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:40.312001    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:40.312073    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:40.322668    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:40.322744    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:40.333695    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:40.333763    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:40.345092    4539 logs.go:276] 0 containers: []
	W0806 00:58:40.345105    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:40.345161    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:40.355712    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:58:40.355732    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:40.355738    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:40.368702    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:40.368712    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:40.379661    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:40.379677    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:40.402999    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:40.403010    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:40.415769    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:40.415780    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:40.420585    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:40.420594    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:40.439250    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:40.439260    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:40.462174    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:40.462187    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:40.474300    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:40.474311    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:40.501881    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:40.501891    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:40.516258    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:40.516270    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:40.533963    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:58:40.533972    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:58:40.545725    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:40.545734    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:40.559534    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:40.559545    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:40.579148    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:40.579160    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:40.603136    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:40.603144    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:40.638044    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:40.638056    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:43.152872    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:42.416446    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:48.155499    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:48.155606    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:48.169738    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:48.169820    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:48.180886    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:48.180963    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:48.191360    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:48.191432    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:48.202184    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:48.202256    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:48.212895    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:48.212964    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:48.223660    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:48.223722    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:48.233674    4539 logs.go:276] 0 containers: []
	W0806 00:58:48.233685    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:48.233742    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:48.244013    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:58:48.244029    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:48.244034    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:48.259345    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:48.259355    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:48.270941    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:48.270951    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:48.283568    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:48.283580    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:48.297141    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:48.297149    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:48.315206    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:58:48.315218    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:58:48.334159    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:48.334169    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:48.361270    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:48.361278    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:48.384207    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:48.384217    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:48.401449    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:48.401460    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:48.417872    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:48.417887    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:48.422099    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:48.422108    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:48.457371    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:48.457382    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:48.472430    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:48.472441    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:48.483765    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:48.483775    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:48.499072    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:48.499087    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:48.517075    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:48.517085    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
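	[Note] Every ID found is then tailed with `docker logs --tail 400 <id>`, which the SSH runner wraps in /bin/bash -c. A sketch under the same local-execution assumption; the two apiserver IDs from this run are used purely as placeholders:

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns the last n lines of a container's combined
// stdout/stderr, matching the `docker logs --tail 400 <id>` calls above.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("docker", "logs",
		"--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, id := range []string{"05773e88ef12", "4b5adefd37e4"} { // placeholder IDs
		fmt.Printf("==> docker logs --tail 400 %s\n", id)
		logTail, err := tailContainerLogs(id, 400)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Print(logTail)
	}
}
```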
	I0806 00:58:47.418846    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:47.419263    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:47.459216    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:58:47.459351    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:47.480654    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:58:47.480772    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:47.495856    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:58:47.495927    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:47.507802    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:58:47.507871    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:47.518408    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:58:47.518479    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:47.533275    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:58:47.533338    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:47.544027    4369 logs.go:276] 0 containers: []
	W0806 00:58:47.544039    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:47.544103    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:47.558361    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:58:47.558378    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:58:47.558383    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:58:47.569670    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:58:47.569681    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:58:47.581674    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:47.581688    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:47.616647    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:47.616658    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:47.653046    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:58:47.653059    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:58:47.667420    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:58:47.667431    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:58:47.685329    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:58:47.685340    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:58:47.697264    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:58:47.697274    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:58:47.712107    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:47.712118    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:47.716843    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:58:47.716849    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:58:47.728967    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:58:47.728977    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:58:47.748773    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:47.748783    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:47.773758    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:58:47.773777    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:50.287464    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:51.044340    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:55.289710    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:55.289890    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:55.303272    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:58:55.303350    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:55.314218    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:58:55.314279    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:55.325265    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:58:55.325328    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:55.336494    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:58:55.336569    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:55.346477    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:58:55.346536    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:55.356905    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:58:55.356973    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:55.367122    4369 logs.go:276] 0 containers: []
	W0806 00:58:55.367140    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:55.367201    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:55.378195    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:58:55.378210    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:55.378215    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:55.410872    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:58:55.410883    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:58:55.426545    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:58:55.426558    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:58:55.443733    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:58:55.443744    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:58:55.462154    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:58:55.462166    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:58:55.473533    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:55.473546    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:55.477979    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:55.477987    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:55.512191    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:58:55.512201    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:58:55.526823    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:58:55.526835    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:58:55.549177    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:58:55.549187    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:58:55.562465    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:58:55.562478    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:58:55.574631    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:55.574641    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:55.598599    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:58:55.598609    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:56.046815    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:56.047100    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:56.068194    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:56.068287    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:56.083080    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:56.083151    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:56.095540    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:56.095605    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:56.106651    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:56.106729    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:56.123890    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:56.123960    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:56.134882    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:56.134955    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:56.145166    4539 logs.go:276] 0 containers: []
	W0806 00:58:56.145178    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:56.145238    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:56.156101    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:58:56.156118    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:56.156124    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:56.167571    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:56.167582    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:56.186704    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:56.186715    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:56.190898    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:56.190908    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:56.203638    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:56.203650    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:56.217407    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:56.217417    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:56.240382    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:56.240394    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:56.261435    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:56.261447    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:56.291042    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:56.291060    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:56.318633    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:56.318644    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:56.355104    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:56.355116    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:56.369749    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:56.369761    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:56.381330    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:56.381340    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:56.398883    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:56.398899    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:56.413462    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:56.413477    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:56.429033    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:58:56.429048    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:58:56.442964    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:56.442976    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
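	[Note] Alongside the per-container tails, each cycle gathers host-level sources: the kubelet and Docker units via journalctl, kernel warnings via dmesg, node state via kubectl describe nodes, and container status via crictl with a `docker ps -a` fallback. The shell commands below are copied verbatim from the Run: lines above; executing them locally (rather than over SSH inside the guest) is an assumption of the sketch:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes one collector command through /bin/bash -c, exactly as the
// ssh_runner.go:195 lines above show, and prints whatever it produces.
func run(cmd string) {
	fmt.Println("==>", cmd)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(string(out))
}

func main() {
	run("sudo journalctl -u kubelet -n 400")
	run("sudo journalctl -u docker -u cri-docker -n 400")
	run("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
```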
	I0806 00:58:58.957372    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:58.112584    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:03.959608    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:03.959698    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:03.970643    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:03.970707    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:03.981416    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:03.981488    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:03.991772    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:03.991834    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:04.002384    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:04.002446    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:04.012791    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:04.012850    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:04.023664    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:04.023730    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:04.034393    4539 logs.go:276] 0 containers: []
	W0806 00:59:04.034406    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:04.034465    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:04.044540    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:04.044557    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:04.044563    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:04.080847    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:04.080860    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:04.108246    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:04.108257    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:04.126714    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:04.126724    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:04.144629    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:04.144640    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:04.155839    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:04.155852    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:04.167858    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:04.167874    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:04.182662    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:04.182673    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:04.194387    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:04.194396    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:04.211986    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:04.211996    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:04.241750    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:04.241761    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:04.246226    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:04.246234    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:04.261901    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:04.261911    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:04.278145    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:04.278157    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:04.292407    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:04.292417    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:04.303900    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:04.303912    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:04.315510    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:04.315521    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:03.114987    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:03.115225    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:03.143041    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:03.143177    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:03.161267    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:03.161356    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:03.174978    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:03.175051    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:03.186110    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:03.186179    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:03.196641    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:03.196707    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:03.207585    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:03.207652    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:03.218581    4369 logs.go:276] 0 containers: []
	W0806 00:59:03.218593    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:03.218650    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:03.229078    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:03.229094    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:03.229099    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:03.245264    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:03.245276    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:03.256550    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:03.256559    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:03.268473    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:03.268484    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:03.302084    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:03.302092    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:03.337668    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:03.337682    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:03.349395    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:03.349409    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:03.368737    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:03.368750    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:03.387104    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:03.387115    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:03.398740    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:03.398752    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:03.423807    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:03.423817    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:03.435546    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:03.435556    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:03.440500    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:03.440509    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:06.843767    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:05.956850    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:11.846448    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:11.846580    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:11.858633    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:11.858707    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:11.871817    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:11.871888    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:11.882175    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:11.882235    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:11.894285    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:11.894354    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:11.905467    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:11.905527    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:11.916113    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:11.916175    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:11.925795    4539 logs.go:276] 0 containers: []
	W0806 00:59:11.925810    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:11.925861    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:11.936278    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:11.936299    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:11.936305    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:11.940519    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:11.940529    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:11.957738    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:11.957748    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:11.968865    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:11.968876    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:11.980414    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:11.980424    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:11.994752    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:11.994762    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:12.005919    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:12.005933    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:12.017139    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:12.017154    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:12.042131    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:12.042138    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:12.070953    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:12.070964    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:12.087788    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:12.087799    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:12.101601    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:12.101612    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:12.116941    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:12.116953    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:12.134791    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:12.134800    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:12.173719    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:12.173730    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:12.188462    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:12.188474    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:12.212071    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:12.212080    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:14.728519    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:10.959182    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:10.959397    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:10.983861    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:10.983968    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:11.001104    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:11.001183    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:11.014273    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:11.014338    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:11.026645    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:11.026713    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:11.037387    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:11.037459    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:11.049057    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:11.049116    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:11.061630    4369 logs.go:276] 0 containers: []
	W0806 00:59:11.061643    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:11.061697    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:11.081317    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:11.081334    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:11.081339    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:11.116623    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:11.116634    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:11.131124    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:11.131135    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:11.145539    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:11.145549    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:11.158062    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:11.158073    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:11.170748    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:11.170761    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:11.194634    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:11.194642    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:11.229349    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:11.229357    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:11.233813    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:11.233822    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:11.249772    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:11.249785    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:11.269832    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:11.269841    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:11.283904    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:11.283915    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:11.301511    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:11.301521    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:13.813977    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:19.730954    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:19.731161    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:19.753059    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:19.753149    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:19.771574    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:19.771658    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:19.783348    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:19.783411    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:19.793919    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:19.793988    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:19.804079    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:19.804136    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:19.814457    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:19.814529    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:19.824785    4539 logs.go:276] 0 containers: []
	W0806 00:59:19.824798    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:19.824863    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:19.835967    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:19.835986    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:19.835991    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:19.850271    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:19.850284    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:19.874935    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:19.874942    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:19.879493    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:19.879499    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:19.914315    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:19.914328    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:19.928625    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:19.928635    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:19.941312    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:19.941328    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:19.952769    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:19.952779    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:19.966656    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:19.966666    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:19.989939    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:19.989950    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:20.001335    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:20.001349    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:20.012863    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:20.012875    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:20.030986    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:20.031000    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:20.049097    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:20.049110    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:20.078574    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:20.078583    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:20.102024    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:20.102035    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:20.119249    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:20.119262    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:18.816186    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:18.816393    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:18.832337    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:18.832419    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:18.844279    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:18.844348    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:18.855455    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:18.855516    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:18.866198    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:18.866303    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:18.876354    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:18.876433    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:18.886526    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:18.886598    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:18.898076    4369 logs.go:276] 0 containers: []
	W0806 00:59:18.898087    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:18.898143    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:18.912724    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:18.912740    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:18.912745    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:18.927317    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:18.927327    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:18.939085    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:18.939096    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:18.958084    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:18.958096    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:18.969811    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:18.969824    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:18.993044    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:18.993065    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:19.026195    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:19.026203    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:19.041502    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:19.041514    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:19.055348    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:19.055360    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:19.079359    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:19.079372    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:19.091778    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:19.091794    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:19.096565    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:19.096572    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:19.130255    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:19.130266    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:22.640643    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:21.644237    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:27.642926    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:27.643287    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:27.675108    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:27.675239    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:27.693940    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:27.694041    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:27.708700    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:27.708780    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:27.720898    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:27.720971    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:27.734158    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:27.734222    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:27.744904    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:27.744979    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:27.755370    4539 logs.go:276] 0 containers: []
	W0806 00:59:27.755382    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:27.755438    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:27.766364    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:27.766382    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:27.766389    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:27.801720    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:27.801735    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:27.816248    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:27.816261    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:27.831439    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:27.831450    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:27.843522    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:27.843532    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:27.873261    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:27.873273    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:27.886595    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:27.886609    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:27.901807    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:27.901817    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:27.913282    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:27.913292    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:27.924818    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:27.924830    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:27.948459    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:27.948469    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:27.952866    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:27.952874    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:27.966919    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:27.966933    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:27.982045    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:27.982057    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:27.999409    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:27.999420    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:28.023036    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:28.023048    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:28.040835    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:28.040852    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:26.646813    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:26.647134    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:26.677964    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:26.678101    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:26.698114    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:26.698211    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:26.712360    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:26.712440    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:26.724272    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:26.724339    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:26.736004    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:26.736067    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:26.746634    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:26.746701    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:26.756654    4369 logs.go:276] 0 containers: []
	W0806 00:59:26.756666    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:26.756732    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:26.770929    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:26.770948    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:26.770953    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:26.784173    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:26.784185    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:26.788713    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:26.788721    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:26.824286    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:26.824300    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:26.838525    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:26.838536    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:26.849889    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:26.849900    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:26.862258    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:26.862269    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:26.876657    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:26.876669    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:26.888602    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:26.888613    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:26.906335    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:26.906346    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:26.941507    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:26.941528    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:26.955776    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:26.955786    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:26.982021    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:26.982036    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:29.497344    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:30.554778    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:34.499392    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:34.499638    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:34.523634    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:34.523753    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:34.540243    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:34.540313    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:34.553434    4369 logs.go:276] 2 containers: [e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:34.553506    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:34.571313    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:34.571378    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:34.582085    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:34.582146    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:34.593007    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:34.593077    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:34.603449    4369 logs.go:276] 0 containers: []
	W0806 00:59:34.603464    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:34.603524    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:34.613603    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:34.613619    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:34.613624    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:34.631535    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:34.631550    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:34.655700    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:34.655710    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:34.667097    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:34.667107    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:34.703513    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:34.703524    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:34.718089    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:34.718103    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:34.732242    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:34.732251    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:34.743640    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:34.743665    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:34.758145    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:34.758157    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:34.791333    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:34.791340    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:34.795805    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:34.795814    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:34.807426    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:34.807437    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:34.819278    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:34.819289    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:35.557396    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:35.557629    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:35.582923    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:35.583045    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:35.599308    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:35.599385    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:35.612262    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:35.612332    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:35.625646    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:35.625713    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:35.636338    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:35.636431    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:35.647123    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:35.647197    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:35.658085    4539 logs.go:276] 0 containers: []
	W0806 00:59:35.658095    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:35.658148    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:35.668456    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:35.668474    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:35.668480    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:35.672886    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:35.672893    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:35.707783    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:35.707794    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:35.719085    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:35.719097    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:35.737172    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:35.737182    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:35.761233    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:35.761242    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:35.790790    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:35.790801    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:35.803435    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:35.803448    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:35.817411    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:35.817424    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:35.828755    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:35.828765    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:35.851897    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:35.851907    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:35.865020    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:35.865031    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:35.878984    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:35.878995    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:35.894181    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:35.894195    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:35.920471    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:35.920483    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:35.934996    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:35.935007    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:35.946404    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:35.946415    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:38.460170    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:37.333126    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:43.462474    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:43.462693    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:43.478932    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:43.479013    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:43.492065    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:43.492144    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:43.503086    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:43.503154    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:43.514124    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:43.514196    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:43.524389    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:43.524456    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:43.538394    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:43.538463    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:43.553279    4539 logs.go:276] 0 containers: []
	W0806 00:59:43.553293    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:43.553350    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:43.563545    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:43.563562    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:43.563567    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:43.592915    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:43.592925    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:43.610089    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:43.610102    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:43.620996    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:43.621009    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:43.656811    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:43.656824    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:43.671142    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:43.671155    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:43.681912    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:43.681924    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:43.693494    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:43.693504    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:43.704542    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:43.704553    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:43.729344    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:43.729355    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:43.745192    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:43.745202    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:43.770683    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:43.770694    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:43.782856    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:43.782866    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:43.788044    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:43.788055    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:43.801334    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:43.801344    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:43.816844    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:43.816858    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:43.833672    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:43.833683    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:42.335433    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:42.335639    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:42.351120    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:42.351200    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:42.362981    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:42.363049    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:42.373385    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:42.373456    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:42.384097    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:42.384167    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:42.394781    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:42.394850    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:42.405030    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:42.405094    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:42.414944    4369 logs.go:276] 0 containers: []
	W0806 00:59:42.414957    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:42.415025    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:42.426574    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:42.426594    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:42.426600    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:42.442132    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:42.442141    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:42.454006    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:42.454017    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:42.488556    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:42.488563    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:42.524410    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:42.524421    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:42.541935    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:42.541946    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:42.555551    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:42.555564    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:42.567421    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:42.567432    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:42.582006    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:42.582017    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:42.596317    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 00:59:42.596329    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 00:59:42.608026    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:42.608039    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:42.633448    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:42.633455    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:42.644753    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:42.644765    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:42.649248    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 00:59:42.649258    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 00:59:42.660357    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:42.660367    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:45.174303    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:46.352836    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:50.176904    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:50.177131    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:50.197796    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:50.197909    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:50.213727    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:50.213794    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:50.225551    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:50.225624    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:50.236380    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:50.236450    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:50.247111    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:50.247175    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:50.258193    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:50.258259    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:50.267942    4369 logs.go:276] 0 containers: []
	W0806 00:59:50.267957    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:50.268009    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:50.278825    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:50.278843    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 00:59:50.278849    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 00:59:50.290343    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:50.290354    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:50.302033    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:50.302043    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:50.323066    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:50.323080    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:50.335055    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:50.335068    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:50.339967    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:50.339976    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:50.351795    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:50.351804    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:50.377213    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:50.377222    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:50.411054    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:50.411061    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:50.446798    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:50.446812    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:50.462127    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 00:59:50.462141    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 00:59:50.473448    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:50.473458    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:50.485615    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:50.485629    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:50.499441    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:50.499451    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:50.515904    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:50.515913    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:51.355154    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:51.355344    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:51.376205    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:51.376301    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:51.392054    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:51.392129    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:51.405684    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:51.405756    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:51.423739    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:51.423810    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:51.434311    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:51.434378    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:51.444993    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:51.445058    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:51.456309    4539 logs.go:276] 0 containers: []
	W0806 00:59:51.456322    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:51.456380    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:51.466805    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:51.466825    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:51.466831    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:51.471616    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:51.471625    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:51.486391    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:51.486401    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:51.498196    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:51.498207    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:51.521662    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:51.521672    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:51.534783    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:51.534794    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:51.564889    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:51.564898    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:51.578811    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:51.578820    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:51.602336    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:51.602348    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:51.614399    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:51.614413    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:51.632699    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:51.632710    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:51.668309    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:51.668322    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:51.681692    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:51.681705    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:51.694283    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:51.694294    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:51.712176    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:51.712187    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:51.723805    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:51.723816    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:51.738162    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:51.738175    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:54.258262    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:53.029815    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:59.260547    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:59.260700    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:59.271943    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:59.272014    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:59.282426    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:59.282491    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:59.292937    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:59.293008    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:59.303792    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:59.303864    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:59.314846    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:59.314915    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:59.326177    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:59.326240    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:59.336681    4539 logs.go:276] 0 containers: []
	W0806 00:59:59.336693    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:59.336743    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:59.347392    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:59.347409    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:59.347417    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:59.351994    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:59.352003    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:59.387033    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:59.387044    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:59.411138    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:59.411150    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:59.428338    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:59.428351    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:59.440429    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:59.440440    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:59.451677    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:59.451688    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:59.476289    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:59.476296    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:59.490588    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:59.490601    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:59.506117    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:59.506133    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:59.518030    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:59.518040    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:59.535559    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:59.535569    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:59.547003    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:59.547013    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:59.558957    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:59.558967    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:59.588769    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:59.588777    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:59.602554    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:59.602564    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:59.614451    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:59.614460    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:58.030665    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:58.030849    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:58.043768    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 00:59:58.043838    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:58.054977    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 00:59:58.055048    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:58.070188    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 00:59:58.070261    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:58.080812    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 00:59:58.080877    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:58.091460    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 00:59:58.091518    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:58.102569    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 00:59:58.102634    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:58.112750    4369 logs.go:276] 0 containers: []
	W0806 00:59:58.112761    4369 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:58.112817    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:58.133501    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 00:59:58.133521    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:58.133527    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:58.168381    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 00:59:58.168392    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 00:59:58.180322    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 00:59:58.180333    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 00:59:58.192400    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 00:59:58.192410    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 00:59:58.204464    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 00:59:58.204474    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 00:59:58.222001    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 00:59:58.222012    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 00:59:58.234025    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:58.234035    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:58.238669    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 00:59:58.238675    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 00:59:58.252731    4369 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:58.252745    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:58.277754    4369 logs.go:123] Gathering logs for container status ...
	I0806 00:59:58.277762    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:58.289180    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 00:59:58.289191    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 00:59:58.304186    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:58.304200    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:58.337838    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 00:59:58.337848    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 00:59:58.352181    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 00:59:58.352191    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 00:59:58.364676    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 00:59:58.364687    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:02.134598    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:00.878025    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:07.136802    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:07.136991    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:07.159406    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 01:00:07.159529    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:07.175506    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 01:00:07.175596    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:07.192377    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 01:00:07.192448    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:07.202807    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 01:00:07.202883    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:07.214274    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 01:00:07.214341    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:07.225255    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 01:00:07.225319    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:07.235819    4539 logs.go:276] 0 containers: []
	W0806 01:00:07.235830    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:07.235892    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:07.246746    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 01:00:07.246764    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:07.246770    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:07.276064    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 01:00:07.276076    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 01:00:07.299528    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 01:00:07.299544    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 01:00:07.312181    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:07.312191    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:07.336471    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:07.336486    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:07.371038    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 01:00:07.371051    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 01:00:07.384317    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 01:00:07.384328    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 01:00:07.398615    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 01:00:07.398628    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 01:00:07.413795    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 01:00:07.413808    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 01:00:07.430272    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 01:00:07.430285    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 01:00:07.446737    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 01:00:07.446749    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 01:00:07.464993    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 01:00:07.465004    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 01:00:07.476339    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:07.476351    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:07.480733    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 01:00:07.480741    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 01:00:07.495819    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 01:00:07.495834    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 01:00:07.507533    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 01:00:07.507550    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 01:00:07.525820    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:00:07.525832    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:10.040148    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:05.880337    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:05.880530    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:05.895650    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:05.895726    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:05.907843    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:05.907913    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:05.918686    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:05.918761    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:05.929083    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:05.929152    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:05.939621    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:05.939685    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:05.950328    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:05.950392    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:05.961591    4369 logs.go:276] 0 containers: []
	W0806 01:00:05.961602    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:05.961663    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:05.972599    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:05.972618    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:05.972624    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:05.983867    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:05.983881    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:05.995671    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:05.995682    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:06.013530    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:06.013543    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:06.017948    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:06.017957    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:06.041109    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:06.041119    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:06.053027    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:06.053039    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:06.067041    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:06.067050    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:06.078859    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:06.078873    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:06.093251    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:06.093260    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:06.127439    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:06.127451    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:06.139493    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:06.139506    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:06.154660    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:06.154676    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:06.166188    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:06.166202    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:06.178199    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:06.178210    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:08.715487    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:15.042561    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:15.042987    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:15.084440    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 01:00:15.084581    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:15.103881    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 01:00:15.103984    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:15.117936    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 01:00:15.118012    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:15.129915    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 01:00:15.129994    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:15.141177    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 01:00:15.141242    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:15.152377    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 01:00:15.152451    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:15.163949    4539 logs.go:276] 0 containers: []
	W0806 01:00:15.163962    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:15.164034    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:15.175147    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 01:00:15.175165    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:15.175171    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:15.211667    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 01:00:15.211677    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 01:00:13.717702    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:13.717920    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:13.735203    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:13.735296    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:13.749397    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:13.749475    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:13.760842    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:13.760916    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:13.771782    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:13.771854    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:13.785616    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:13.785689    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:13.796117    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:13.796175    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:13.806568    4369 logs.go:276] 0 containers: []
	W0806 01:00:13.806583    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:13.806645    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:13.816931    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:13.816954    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:13.816959    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:13.856488    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:13.856499    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:13.874799    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:13.874809    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:13.909688    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:13.909698    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:13.923373    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:13.923383    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:13.936116    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:13.936127    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:13.951297    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:13.951307    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:13.977472    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:13.977482    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:13.996471    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:13.996483    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:14.008341    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:14.008353    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:14.020039    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:14.020049    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:14.024678    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:14.024684    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:14.036434    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:14.036448    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:14.048910    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:14.048924    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:14.063549    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:14.063562    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:15.224805    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 01:00:15.224817    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 01:00:15.236085    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 01:00:15.236097    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 01:00:15.251290    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 01:00:15.251303    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 01:00:15.263465    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 01:00:15.263479    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 01:00:15.281402    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:15.281416    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:15.306038    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 01:00:15.306048    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 01:00:15.323271    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 01:00:15.323285    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 01:00:15.340097    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 01:00:15.340111    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 01:00:15.357388    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 01:00:15.357401    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 01:00:15.369144    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 01:00:15.369160    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 01:00:15.395307    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 01:00:15.395321    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 01:00:15.407189    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:00:15.407201    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:15.421339    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:15.421353    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:15.450544    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:15.450552    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:15.455303    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 01:00:15.455312    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 01:00:17.972443    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:16.582028    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:22.974839    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:22.974919    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:22.987929    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 01:00:22.988002    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:22.998380    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 01:00:22.998448    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:23.012280    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 01:00:23.012343    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:23.022686    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 01:00:23.022753    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:23.033404    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 01:00:23.033469    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:23.044636    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 01:00:23.044706    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:23.054998    4539 logs.go:276] 0 containers: []
	W0806 01:00:23.055013    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:23.055068    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:23.065786    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 01:00:23.065808    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 01:00:23.065814    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 01:00:23.078898    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:23.078909    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:23.115803    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 01:00:23.115814    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 01:00:23.133391    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:23.133400    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:23.158157    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:00:23.158165    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:23.171108    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 01:00:23.171121    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 01:00:23.184838    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 01:00:23.184848    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 01:00:23.196370    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 01:00:23.196382    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 01:00:23.220402    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 01:00:23.220415    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 01:00:23.235479    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 01:00:23.235491    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 01:00:23.253318    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 01:00:23.253328    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 01:00:23.266196    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:23.266206    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:23.295376    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:23.295385    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:23.299417    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 01:00:23.299423    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 01:00:23.312785    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 01:00:23.312796    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 01:00:23.332022    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 01:00:23.332032    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 01:00:23.343946    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 01:00:23.343956    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 01:00:21.584224    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:21.584435    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:21.602285    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:21.602369    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:21.614682    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:21.614751    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:21.625144    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:21.625211    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:21.637991    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:21.638055    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:21.649085    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:21.649153    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:21.659817    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:21.659881    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:21.669920    4369 logs.go:276] 0 containers: []
	W0806 01:00:21.669931    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:21.669985    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:21.679715    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:21.679735    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:21.679742    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:21.695781    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:21.695792    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:21.710853    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:21.710868    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:21.723312    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:21.723322    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:21.728398    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:21.728404    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:21.740441    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:21.740452    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:21.757469    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:21.757480    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:21.783821    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:21.783835    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:21.818638    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:21.818646    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:21.830449    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:21.830460    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:21.844732    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:21.844743    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:21.856292    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:21.856304    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:21.890325    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:21.890337    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:21.905025    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:21.905035    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:21.917062    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:21.917074    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:24.429577    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:25.857180    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:29.431905    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:29.432027    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:29.443489    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:29.443569    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:29.454274    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:29.454349    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:29.465429    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:29.465499    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:29.475764    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:29.475834    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:29.485983    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:29.486052    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:29.496719    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:29.496785    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:29.507400    4369 logs.go:276] 0 containers: []
	W0806 01:00:29.507409    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:29.507462    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:29.518380    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:29.518397    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:29.518402    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:29.530135    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:29.530148    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:29.541646    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:29.541658    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:29.556004    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:29.556017    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:29.574481    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:29.574492    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:29.596881    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:29.596894    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:29.601321    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:29.601328    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:29.634975    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:29.634990    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:29.650509    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:29.650523    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:29.661972    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:29.661986    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:29.673656    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:29.673669    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:29.690874    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:29.690884    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:29.703171    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:29.703183    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:29.737102    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:29.737110    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:29.749032    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:29.749045    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:30.859537    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:30.859860    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:30.898385    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 01:00:30.898525    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:30.917778    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 01:00:30.917863    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:30.933254    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 01:00:30.933315    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:30.944676    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 01:00:30.944746    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:30.955887    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 01:00:30.955960    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:30.966925    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 01:00:30.966987    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:30.978014    4539 logs.go:276] 0 containers: []
	W0806 01:00:30.978031    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:30.978089    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:30.989583    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 01:00:30.989606    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:30.989612    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:31.018160    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 01:00:31.018168    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 01:00:31.032130    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 01:00:31.032143    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 01:00:31.044537    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 01:00:31.044547    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 01:00:31.059713    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 01:00:31.059724    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 01:00:31.078999    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 01:00:31.079010    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 01:00:31.095320    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:31.095331    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:31.119851    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 01:00:31.119859    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 01:00:31.137598    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:31.137608    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:31.172863    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 01:00:31.172875    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 01:00:31.185923    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 01:00:31.185934    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 01:00:31.198110    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 01:00:31.198120    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 01:00:31.222288    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 01:00:31.222301    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 01:00:31.233829    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:31.233842    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:31.237952    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 01:00:31.237962    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 01:00:31.252606    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 01:00:31.252617    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 01:00:31.268181    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:00:31.268192    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:33.781961    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:32.276320    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:38.784194    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:38.784334    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:38.797626    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 01:00:38.797698    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:38.808450    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 01:00:38.808519    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:38.819544    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 01:00:38.819609    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:38.831954    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 01:00:38.832021    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:38.842647    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 01:00:38.842712    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:38.853666    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 01:00:38.853731    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:38.868324    4539 logs.go:276] 0 containers: []
	W0806 01:00:38.868337    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:38.868394    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:38.879170    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 01:00:38.879187    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 01:00:38.879195    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 01:00:38.891783    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 01:00:38.891794    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 01:00:38.906196    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 01:00:38.906210    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 01:00:38.917777    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 01:00:38.917790    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 01:00:38.932456    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 01:00:38.932474    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 01:00:38.949896    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 01:00:38.949908    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 01:00:38.961507    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:38.961519    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:38.993004    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 01:00:38.993021    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 01:00:39.007779    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 01:00:39.007791    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 01:00:39.025275    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:00:39.025285    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:39.037150    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:39.037161    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:39.041363    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:39.041369    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:39.075852    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 01:00:39.075864    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 01:00:39.090309    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 01:00:39.090320    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 01:00:39.105861    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 01:00:39.105871    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 01:00:39.117669    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 01:00:39.117679    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 01:00:39.141213    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:39.141225    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:37.277721    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:37.277927    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:37.305970    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:37.306082    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:37.320741    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:37.320813    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:37.334781    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:37.334848    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:37.349394    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:37.349457    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:37.360025    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:37.360102    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:37.370829    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:37.370896    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:37.381164    4369 logs.go:276] 0 containers: []
	W0806 01:00:37.381176    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:37.381230    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:37.391369    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:37.391389    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:37.391395    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:37.426190    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:37.426199    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:37.440649    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:37.440662    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:37.452966    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:37.452977    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:37.464904    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:37.464915    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:37.481731    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:37.481742    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:37.505148    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:37.505155    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:37.509306    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:37.509315    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:37.544428    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:37.544439    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:37.568999    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:37.569010    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:37.581633    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:37.581646    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:37.594513    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:37.594524    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:37.609563    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:37.609573    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:37.621594    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:37.621606    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:37.645125    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:37.645138    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:40.159144    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:41.667128    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:45.161500    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:45.161633    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:45.176294    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:45.176368    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:45.188243    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:45.188313    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:45.198867    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:45.198937    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:45.210298    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:45.210370    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:45.222100    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:45.222164    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:45.232780    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:45.232848    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:45.243304    4369 logs.go:276] 0 containers: []
	W0806 01:00:45.243317    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:45.243378    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:45.253654    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:45.253673    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:45.253679    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:45.288990    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:45.289001    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:45.307060    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:45.307070    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:45.319026    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:45.319038    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:45.337898    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:45.337909    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:45.352163    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:45.352178    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:45.364381    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:45.364395    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:45.379323    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:45.379334    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:45.391162    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:45.391173    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:45.424970    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:45.424982    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:45.436935    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:45.436946    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:45.461575    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:45.461585    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:45.465855    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:45.465864    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:45.477412    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:45.477426    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:45.488766    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:45.488777    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:46.669423    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:46.669488    4539 kubeadm.go:597] duration metric: took 4m3.372749s to restartPrimaryControlPlane
	W0806 01:00:46.669540    4539 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 01:00:46.669566    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0806 01:00:47.598904    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 01:00:47.604131    4539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 01:00:47.607043    4539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 01:00:47.609637    4539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 01:00:47.609645    4539 kubeadm.go:157] found existing configuration files:
	
	I0806 01:00:47.609669    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf
	I0806 01:00:47.612135    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 01:00:47.612160    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 01:00:47.614900    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf
	I0806 01:00:47.617352    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 01:00:47.617371    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 01:00:47.620760    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf
	I0806 01:00:47.624270    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 01:00:47.624296    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 01:00:47.627106    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf
	I0806 01:00:47.629807    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 01:00:47.629831    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 01:00:47.633113    4539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 01:00:47.650078    4539 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0806 01:00:47.650106    4539 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 01:00:47.700320    4539 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 01:00:47.700373    4539 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 01:00:47.700415    4539 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 01:00:47.749898    4539 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 01:00:47.755113    4539 out.go:204]   - Generating certificates and keys ...
	I0806 01:00:47.755151    4539 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 01:00:47.755189    4539 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 01:00:47.755240    4539 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 01:00:47.755273    4539 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 01:00:47.755325    4539 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 01:00:47.755355    4539 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 01:00:47.755382    4539 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 01:00:47.755428    4539 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 01:00:47.755470    4539 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 01:00:47.755511    4539 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 01:00:47.755532    4539 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 01:00:47.755565    4539 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 01:00:47.848780    4539 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 01:00:47.961286    4539 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 01:00:48.027964    4539 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 01:00:48.196225    4539 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 01:00:48.227929    4539 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 01:00:48.228277    4539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 01:00:48.228300    4539 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 01:00:48.298803    4539 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 01:00:48.303213    4539 out.go:204]   - Booting up control plane ...
	I0806 01:00:48.303259    4539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 01:00:48.303306    4539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 01:00:48.303342    4539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 01:00:48.303385    4539 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 01:00:48.303473    4539 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 01:00:48.003207    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:53.309380    4539 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.007648 seconds
	I0806 01:00:53.309433    4539 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 01:00:53.313795    4539 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 01:00:53.836946    4539 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 01:00:53.837091    4539 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-180000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 01:00:54.340659    4539 kubeadm.go:310] [bootstrap-token] Using token: irs0sz.hqxmy2t5x8gei7l0
	I0806 01:00:54.346548    4539 out.go:204]   - Configuring RBAC rules ...
	I0806 01:00:54.346610    4539 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 01:00:54.346663    4539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 01:00:54.348419    4539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 01:00:54.354097    4539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 01:00:54.354968    4539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 01:00:54.355715    4539 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 01:00:54.358826    4539 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 01:00:54.523972    4539 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 01:00:54.744808    4539 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 01:00:54.745305    4539 kubeadm.go:310] 
	I0806 01:00:54.745337    4539 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 01:00:54.745340    4539 kubeadm.go:310] 
	I0806 01:00:54.745384    4539 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 01:00:54.745390    4539 kubeadm.go:310] 
	I0806 01:00:54.745404    4539 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 01:00:54.745441    4539 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 01:00:54.745469    4539 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 01:00:54.745472    4539 kubeadm.go:310] 
	I0806 01:00:54.745507    4539 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 01:00:54.745513    4539 kubeadm.go:310] 
	I0806 01:00:54.745532    4539 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 01:00:54.745535    4539 kubeadm.go:310] 
	I0806 01:00:54.745579    4539 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 01:00:54.745617    4539 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 01:00:54.745676    4539 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 01:00:54.745681    4539 kubeadm.go:310] 
	I0806 01:00:54.745715    4539 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 01:00:54.745753    4539 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 01:00:54.745756    4539 kubeadm.go:310] 
	I0806 01:00:54.745790    4539 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token irs0sz.hqxmy2t5x8gei7l0 \
	I0806 01:00:54.745855    4539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:004497139f3dc048a20953509ef68dec08d54d5db6f0d1b10a415219fecf194f \
	I0806 01:00:54.745866    4539 kubeadm.go:310] 	--control-plane 
	I0806 01:00:54.745868    4539 kubeadm.go:310] 
	I0806 01:00:54.745917    4539 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 01:00:54.745920    4539 kubeadm.go:310] 
	I0806 01:00:54.745965    4539 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token irs0sz.hqxmy2t5x8gei7l0 \
	I0806 01:00:54.746019    4539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:004497139f3dc048a20953509ef68dec08d54d5db6f0d1b10a415219fecf194f 
	I0806 01:00:54.746248    4539 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 01:00:54.746284    4539 cni.go:84] Creating CNI manager for ""
	I0806 01:00:54.746293    4539 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:00:54.750066    4539 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 01:00:54.759123    4539 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 01:00:54.762371    4539 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0806 01:00:54.767127    4539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 01:00:54.767173    4539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:00:54.767200    4539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-180000 minikube.k8s.io/updated_at=2024_08_06T01_00_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=stopped-upgrade-180000 minikube.k8s.io/primary=true
	I0806 01:00:54.813807    4539 kubeadm.go:1113] duration metric: took 46.668708ms to wait for elevateKubeSystemPrivileges
	I0806 01:00:54.813819    4539 ops.go:34] apiserver oom_adj: -16
	I0806 01:00:54.813825    4539 kubeadm.go:394] duration metric: took 4m11.531061208s to StartCluster
	I0806 01:00:54.813835    4539 settings.go:142] acquiring lock: {Name:mk345cecdfb5b849013811e238a7c51cfd047298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:00:54.813930    4539 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:00:54.814356    4539 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/kubeconfig: {Name:mk054609795edfdc491af119142ed9d8e6063b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:00:54.814557    4539 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:00:54.814596    4539 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 01:00:54.814630    4539 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-180000"
	I0806 01:00:54.814639    4539 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-180000"
	I0806 01:00:54.814653    4539 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 01:00:54.814643    4539 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-180000"
	I0806 01:00:54.814657    4539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-180000"
	W0806 01:00:54.814663    4539 addons.go:243] addon storage-provisioner should already be in state true
	I0806 01:00:54.814675    4539 host.go:66] Checking if "stopped-upgrade-180000" exists ...
	I0806 01:00:54.822232    4539 out.go:177] * Verifying Kubernetes components...
	I0806 01:00:54.824993    4539 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 01:00:54.825019    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 01:00:54.825840    4539 kapi.go:59] client config for stopped-upgrade-180000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040a7f90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 01:00:54.825993    4539 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-180000"
	W0806 01:00:54.826000    4539 addons.go:243] addon default-storageclass should already be in state true
	I0806 01:00:54.826011    4539 host.go:66] Checking if "stopped-upgrade-180000" exists ...
	I0806 01:00:54.826615    4539 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 01:00:54.826621    4539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 01:00:54.826627    4539 sshutil.go:53] new ssh client: &{IP:localhost Port:50451 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0806 01:00:54.829213    4539 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 01:00:54.829220    4539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 01:00:54.829226    4539 sshutil.go:53] new ssh client: &{IP:localhost Port:50451 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0806 01:00:54.896524    4539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 01:00:54.901839    4539 api_server.go:52] waiting for apiserver process to appear ...
	I0806 01:00:54.901880    4539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 01:00:54.906277    4539 api_server.go:72] duration metric: took 91.7085ms to wait for apiserver process to appear ...
	I0806 01:00:54.906287    4539 api_server.go:88] waiting for apiserver healthz status ...
	I0806 01:00:54.906294    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:54.915060    4539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 01:00:54.915405    4539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 01:00:53.005430    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:53.005611    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:53.016542    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:00:53.016614    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:53.027053    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:00:53.027123    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:53.039440    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:00:53.039513    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:53.054568    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:00:53.054634    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:53.065062    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:00:53.065117    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:53.076090    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:00:53.076158    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:53.087207    4369 logs.go:276] 0 containers: []
	W0806 01:00:53.087225    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:53.087281    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:53.097622    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:00:53.097639    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:53.097643    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:53.132559    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:00:53.132570    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:00:53.144847    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:53.144859    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:53.181110    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:00:53.181123    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:00:53.193975    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:00:53.193989    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:00:53.211537    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:00:53.211547    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:00:53.227894    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:00:53.227906    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:00:53.239776    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:53.239786    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:53.264271    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:53.264279    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:53.268595    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:00:53.268603    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:00:53.282901    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:00:53.282911    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:00:53.296840    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:00:53.296850    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:00:53.309023    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:00:53.309035    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:00:53.325899    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:00:53.325912    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:00:53.342461    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:00:53.342477    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:55.856025    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:59.908413    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:59.908474    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:00.857685    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:00.857855    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:01:04.909232    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:04.909279    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:00.879459    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:01:00.879553    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:01:00.894804    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:01:00.894882    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:01:00.907866    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:01:00.907942    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:01:00.922507    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:01:00.922573    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:01:00.937834    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:01:00.937905    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:01:00.949240    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:01:00.949308    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:01:00.963537    4369 logs.go:276] 0 containers: []
	W0806 01:01:00.963549    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:01:00.963608    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:01:00.973896    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:01:00.973914    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:01:00.973919    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:01:00.988087    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:01:00.988101    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:01:00.999760    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:01:00.999772    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:01:01.013027    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:01:01.013037    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:01:01.027829    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:01:01.027840    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:01:01.053641    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:01:01.053664    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:01:01.079162    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:01:01.079174    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:01:01.116379    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:01:01.116396    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:01:01.130665    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:01:01.130682    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:01:01.142934    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:01:01.142946    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:01:01.155126    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:01:01.155137    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:01:01.172845    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:01:01.172857    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:01:01.184541    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:01:01.184551    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:01:01.189079    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:01:01.189086    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:01:01.225201    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:01:01.225212    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:01:03.738906    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:09.909782    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:09.909802    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:08.741185    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:08.741271    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:01:08.752609    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:01:08.752682    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:01:08.763157    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:01:08.763228    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:01:08.774050    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:01:08.774118    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:01:08.785210    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:01:08.785281    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:01:08.796064    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:01:08.796131    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:01:08.807702    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:01:08.807773    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:01:08.818398    4369 logs.go:276] 0 containers: []
	W0806 01:01:08.818410    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:01:08.818464    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:01:08.829403    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:01:08.829421    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:01:08.829426    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:01:08.848544    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:01:08.848554    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:01:08.860830    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:01:08.860841    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:01:08.872555    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:01:08.872569    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:01:08.912434    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:01:08.912449    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:01:08.924236    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:01:08.924248    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:01:08.936327    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:01:08.936338    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:01:08.948676    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:01:08.948700    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:01:08.963763    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:01:08.963773    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:01:08.981599    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:01:08.981609    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:01:09.006156    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:01:09.006173    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:01:09.020636    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:01:09.020646    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:01:09.032285    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:01:09.032296    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:01:09.066153    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:01:09.066163    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:01:09.079593    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:01:09.079604    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:01:14.910397    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:14.910443    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:11.586713    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:19.911262    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:19.911297    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:16.588111    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:16.588309    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:01:16.614846    4369 logs.go:276] 1 containers: [0ecb709eae60]
	I0806 01:01:16.614947    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:01:16.631089    4369 logs.go:276] 1 containers: [886dd9753609]
	I0806 01:01:16.631164    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:01:16.644294    4369 logs.go:276] 4 containers: [bb9d35fbe073 dbfa4e1e9e6d e7dedf60b7d2 c08c8ebaf711]
	I0806 01:01:16.644377    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:01:16.655445    4369 logs.go:276] 1 containers: [3145a8754ef7]
	I0806 01:01:16.655516    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:01:16.665696    4369 logs.go:276] 1 containers: [880c527f21d1]
	I0806 01:01:16.665765    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:01:16.676082    4369 logs.go:276] 1 containers: [fea065534c3d]
	I0806 01:01:16.676149    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:01:16.686755    4369 logs.go:276] 0 containers: []
	W0806 01:01:16.686767    4369 logs.go:278] No container was found matching "kindnet"
	I0806 01:01:16.686828    4369 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:01:16.697121    4369 logs.go:276] 1 containers: [060e7b2ec0dc]
	I0806 01:01:16.697142    4369 logs.go:123] Gathering logs for Docker ...
	I0806 01:01:16.697147    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:01:16.719751    4369 logs.go:123] Gathering logs for dmesg ...
	I0806 01:01:16.719759    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:01:16.724638    4369 logs.go:123] Gathering logs for etcd [886dd9753609] ...
	I0806 01:01:16.724646    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 886dd9753609"
	I0806 01:01:16.738937    4369 logs.go:123] Gathering logs for coredns [c08c8ebaf711] ...
	I0806 01:01:16.738947    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c08c8ebaf711"
	I0806 01:01:16.750718    4369 logs.go:123] Gathering logs for kube-proxy [880c527f21d1] ...
	I0806 01:01:16.750730    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 880c527f21d1"
	I0806 01:01:16.770981    4369 logs.go:123] Gathering logs for kube-controller-manager [fea065534c3d] ...
	I0806 01:01:16.770996    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fea065534c3d"
	I0806 01:01:16.788220    4369 logs.go:123] Gathering logs for kubelet ...
	I0806 01:01:16.788230    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:01:16.821461    4369 logs.go:123] Gathering logs for kube-apiserver [0ecb709eae60] ...
	I0806 01:01:16.821470    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ecb709eae60"
	I0806 01:01:16.836034    4369 logs.go:123] Gathering logs for coredns [dbfa4e1e9e6d] ...
	I0806 01:01:16.836044    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbfa4e1e9e6d"
	I0806 01:01:16.847727    4369 logs.go:123] Gathering logs for coredns [e7dedf60b7d2] ...
	I0806 01:01:16.847738    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7dedf60b7d2"
	I0806 01:01:16.860092    4369 logs.go:123] Gathering logs for kube-scheduler [3145a8754ef7] ...
	I0806 01:01:16.860103    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3145a8754ef7"
	I0806 01:01:16.874931    4369 logs.go:123] Gathering logs for storage-provisioner [060e7b2ec0dc] ...
	I0806 01:01:16.874945    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 060e7b2ec0dc"
	I0806 01:01:16.886221    4369 logs.go:123] Gathering logs for container status ...
	I0806 01:01:16.886231    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:01:16.897911    4369 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:01:16.897921    4369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:01:16.935243    4369 logs.go:123] Gathering logs for coredns [bb9d35fbe073] ...
	I0806 01:01:16.935254    4369 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb9d35fbe073"
	I0806 01:01:19.451611    4369 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:24.453552    4369 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:24.458247    4369 out.go:177] 
	W0806 01:01:24.462153    4369 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0806 01:01:24.462162    4369 out.go:239] * 
	W0806 01:01:24.462900    4369 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:01:24.473122    4369 out.go:177] 
	I0806 01:01:24.912268    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:24.912287    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0806 01:01:25.280208    4539 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0806 01:01:25.284406    4539 out.go:177] * Enabled addons: storage-provisioner
	I0806 01:01:25.293311    4539 addons.go:510] duration metric: took 30.478929875s for enable addons: enabled=[storage-provisioner]
	I0806 01:01:29.913603    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:29.913630    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:34.915130    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:34.915162    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:39.917271    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:39.917319    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
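Note: the healthz probe loop recorded above can be reproduced by hand against the same endpoint. The sketch below is illustrative only, not minikube's own code; it assumes curl is available, skips TLS verification with -k, and approximates the 5-second Client.Timeout implied by the repeated failures:

	# Hypothetical manual probe of the apiserver health endpoint from the log.
	# --max-time 5 mirrors the client timeout seen in the errors above.
	while true; do
	  curl -sk --max-time 5 https://10.0.2.15:8443/healthz && echo || echo "healthz unreachable"
	  sleep 5
	done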
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-08-06 07:52:33 UTC, ends at Tue 2024-08-06 08:01:40 UTC. --
	Aug 06 08:01:26 running-upgrade-217000 dockerd[3178]: time="2024-08-06T08:01:26.475947419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:01:26 running-upgrade-217000 dockerd[3178]: time="2024-08-06T08:01:26.475993084Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6785f7d6eed05a717b4c13619cd53c0a0d5aa97d8e6526b4b25fbdbaa231ad66 pid=19238 runtime=io.containerd.runc.v2
	Aug 06 08:01:26 running-upgrade-217000 dockerd[3178]: time="2024-08-06T08:01:26.481477527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:01:26 running-upgrade-217000 dockerd[3178]: time="2024-08-06T08:01:26.481505026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:01:26 running-upgrade-217000 dockerd[3178]: time="2024-08-06T08:01:26.481511484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:01:26 running-upgrade-217000 dockerd[3178]: time="2024-08-06T08:01:26.481575400Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5d20e5e81c2af93ec939e79144b56342fa6fbf47de4c3332bcafd1e08374ef76 pid=19264 runtime=io.containerd.runc.v2
	Aug 06 08:01:27 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:27Z" level=error msg="ContainerStats resp: {0x40006d9d40 linux}"
	Aug 06 08:01:27 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:27Z" level=error msg="ContainerStats resp: {0x400015e680 linux}"
	Aug 06 08:01:27 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:27Z" level=error msg="ContainerStats resp: {0x400090e140 linux}"
	Aug 06 08:01:27 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:27Z" level=error msg="ContainerStats resp: {0x400015f700 linux}"
	Aug 06 08:01:27 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:27Z" level=error msg="ContainerStats resp: {0x400090f4c0 linux}"
	Aug 06 08:01:27 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:27Z" level=error msg="ContainerStats resp: {0x4000a66380 linux}"
	Aug 06 08:01:27 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:27Z" level=error msg="ContainerStats resp: {0x400090f940 linux}"
	Aug 06 08:01:31 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:31Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 06 08:01:36 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:36Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 06 08:01:37 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:37Z" level=error msg="ContainerStats resp: {0x4000359e00 linux}"
	Aug 06 08:01:37 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:37Z" level=error msg="ContainerStats resp: {0x400015e100 linux}"
	Aug 06 08:01:38 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:38Z" level=error msg="ContainerStats resp: {0x400090e7c0 linux}"
	Aug 06 08:01:39 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:39Z" level=error msg="ContainerStats resp: {0x4000888500 linux}"
	Aug 06 08:01:39 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:39Z" level=error msg="ContainerStats resp: {0x40008886c0 linux}"
	Aug 06 08:01:39 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:39Z" level=error msg="ContainerStats resp: {0x400007e440 linux}"
	Aug 06 08:01:39 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:39Z" level=error msg="ContainerStats resp: {0x400007f540 linux}"
	Aug 06 08:01:39 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:39Z" level=error msg="ContainerStats resp: {0x400007fe80 linux}"
	Aug 06 08:01:39 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:39Z" level=error msg="ContainerStats resp: {0x4000889140 linux}"
	Aug 06 08:01:39 running-upgrade-217000 cri-dockerd[3020]: time="2024-08-06T08:01:39Z" level=error msg="ContainerStats resp: {0x40009e4ac0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	5d20e5e81c2af       edaa71f2aee88       14 seconds ago      Running             coredns                   2                   82c3fede38efc
	6785f7d6eed05       edaa71f2aee88       14 seconds ago      Running             coredns                   2                   31ed6d52932cd
	bb9d35fbe073d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   31ed6d52932cd
	dbfa4e1e9e6dc       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   82c3fede38efc
	060e7b2ec0dcc       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   8b479a06d1638
	880c527f21d1c       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   b7e8163d29d86
	886dd97536098       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   c3a0ebade1a2a
	0ecb709eae603       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   843790324e3fe
	fea065534c3da       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   c4a0f66a543e6
	3145a8754ef75       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   57aec0a7a38dc
	
	
	==> coredns [5d20e5e81c2a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6993272245280164516.5806123364856448554. HINFO: read udp 10.244.0.3:44557->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6993272245280164516.5806123364856448554. HINFO: read udp 10.244.0.3:59801->10.0.2.3:53: i/o timeout
	
	
	==> coredns [6785f7d6eed0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 294331477363217552.3957611019481258962. HINFO: read udp 10.244.0.2:39550->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 294331477363217552.3957611019481258962. HINFO: read udp 10.244.0.2:45957->10.0.2.3:53: i/o timeout
	
	
	==> coredns [bb9d35fbe073] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4220125864460211744.5026482121226885805. HINFO: read udp 10.244.0.2:33247->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4220125864460211744.5026482121226885805. HINFO: read udp 10.244.0.2:46281->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4220125864460211744.5026482121226885805. HINFO: read udp 10.244.0.2:50990->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4220125864460211744.5026482121226885805. HINFO: read udp 10.244.0.2:46691->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4220125864460211744.5026482121226885805. HINFO: read udp 10.244.0.2:57861->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4220125864460211744.5026482121226885805. HINFO: read udp 10.244.0.2:36137->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4220125864460211744.5026482121226885805. HINFO: read udp 10.244.0.2:35228->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4220125864460211744.5026482121226885805. HINFO: read udp 10.244.0.2:46690->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4220125864460211744.5026482121226885805. HINFO: read udp 10.244.0.2:34184->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4220125864460211744.5026482121226885805. HINFO: read udp 10.244.0.2:43657->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [dbfa4e1e9e6d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5233062436636006911.1208263646976230832. HINFO: read udp 10.244.0.3:43496->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5233062436636006911.1208263646976230832. HINFO: read udp 10.244.0.3:34502->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5233062436636006911.1208263646976230832. HINFO: read udp 10.244.0.3:35371->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5233062436636006911.1208263646976230832. HINFO: read udp 10.244.0.3:53756->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5233062436636006911.1208263646976230832. HINFO: read udp 10.244.0.3:52324->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5233062436636006911.1208263646976230832. HINFO: read udp 10.244.0.3:60931->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5233062436636006911.1208263646976230832. HINFO: read udp 10.244.0.3:38699->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5233062436636006911.1208263646976230832. HINFO: read udp 10.244.0.3:33689->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5233062436636006911.1208263646976230832. HINFO: read udp 10.244.0.3:57983->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5233062436636006911.1208263646976230832. HINFO: read udp 10.244.0.3:50908->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-217000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-217000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=running-upgrade-217000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T00_57_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:57:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-217000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 08:01:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:57:23 +0000   Tue, 06 Aug 2024 07:57:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:57:23 +0000   Tue, 06 Aug 2024 07:57:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:57:23 +0000   Tue, 06 Aug 2024 07:57:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:57:23 +0000   Tue, 06 Aug 2024 07:57:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-217000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 eef917ae50b844e39c1a0230db96a659
	  System UUID:                eef917ae50b844e39c1a0230db96a659
	  Boot ID:                    95410a9b-54a3-421f-b585-29b934aef144
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-ktwn5                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m2s
	  kube-system                 coredns-6d4b75cb6d-pf4bn                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m2s
	  kube-system                 etcd-running-upgrade-217000                       100m (5%)    0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-217000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-217000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-24n4n                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-running-upgrade-217000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m1s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-217000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-217000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-217000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-217000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m3s   node-controller  Node running-upgrade-217000 event: Registered Node running-upgrade-217000 in Controller
	
	
	==> dmesg <==
	[  +1.644535] systemd-fstab-generator[879]: Ignoring "noauto" for root device
	[  +0.084948] systemd-fstab-generator[890]: Ignoring "noauto" for root device
	[  +0.075460] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +1.138104] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.087479] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.077359] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.712649] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
	[  +9.661882] systemd-fstab-generator[1929]: Ignoring "noauto" for root device
	[Aug 6 07:53] systemd-fstab-generator[2206]: Ignoring "noauto" for root device
	[  +0.215081] systemd-fstab-generator[2247]: Ignoring "noauto" for root device
	[  +0.098344] systemd-fstab-generator[2258]: Ignoring "noauto" for root device
	[  +0.083598] systemd-fstab-generator[2271]: Ignoring "noauto" for root device
	[  +2.505816] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.162091] systemd-fstab-generator[2977]: Ignoring "noauto" for root device
	[  +0.080715] systemd-fstab-generator[2988]: Ignoring "noauto" for root device
	[  +0.076461] systemd-fstab-generator[2999]: Ignoring "noauto" for root device
	[  +0.092436] systemd-fstab-generator[3013]: Ignoring "noauto" for root device
	[  +2.268200] systemd-fstab-generator[3165]: Ignoring "noauto" for root device
	[  +3.553540] systemd-fstab-generator[3673]: Ignoring "noauto" for root device
	[  +1.644506] systemd-fstab-generator[4292]: Ignoring "noauto" for root device
	[ +19.296432] kauditd_printk_skb: 68 callbacks suppressed
	[Aug 6 07:57] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.654305] systemd-fstab-generator[12323]: Ignoring "noauto" for root device
	[  +5.643454] systemd-fstab-generator[12918]: Ignoring "noauto" for root device
	[  +0.460363] systemd-fstab-generator[13048]: Ignoring "noauto" for root device
	
	
	==> etcd [886dd9753609] <==
	{"level":"info","ts":"2024-08-06T07:57:19.279Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T07:57:19.279Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-06T07:57:19.279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-06T07:57:19.279Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-06T07:57:19.279Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-06T07:57:19.279Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-06T07:57:19.279Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T07:57:20.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-06T07:57:20.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-06T07:57:20.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-06T07:57:20.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-06T07:57:20.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-06T07:57:20.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-06T07:57:20.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-06T07:57:20.016Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:57:20.017Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:57:20.017Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:57:20.017Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:57:20.017Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-217000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T07:57:20.017Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:57:20.017Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:57:20.017Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:57:20.017Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T07:57:20.018Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-06T07:57:20.018Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 08:01:40 up 9 min,  0 users,  load average: 0.46, 0.44, 0.28
	Linux running-upgrade-217000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [0ecb709eae60] <==
	I0806 07:57:21.184437       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0806 07:57:21.198324       1 controller.go:611] quota admission added evaluator for: namespaces
	I0806 07:57:21.226187       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0806 07:57:21.246680       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0806 07:57:21.248867       1 cache.go:39] Caches are synced for autoregister controller
	I0806 07:57:21.248950       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0806 07:57:21.248987       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0806 07:57:21.249183       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:57:21.973890       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0806 07:57:22.152381       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0806 07:57:22.153530       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0806 07:57:22.153540       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 07:57:22.282960       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 07:57:22.299030       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 07:57:22.403403       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0806 07:57:22.405949       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0806 07:57:22.406343       1 controller.go:611] quota admission added evaluator for: endpoints
	I0806 07:57:22.407783       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0806 07:57:23.280258       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0806 07:57:23.644860       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0806 07:57:23.647878       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0806 07:57:23.656670       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0806 07:57:37.588958       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0806 07:57:38.138410       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0806 07:57:39.327715       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [fea065534c3d] <==
	I0806 07:57:37.337611       1 shared_informer.go:262] Caches are synced for PV protection
	I0806 07:57:37.337661       1 shared_informer.go:262] Caches are synced for attach detach
	I0806 07:57:37.337763       1 shared_informer.go:262] Caches are synced for GC
	I0806 07:57:37.337779       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0806 07:57:37.337788       1 shared_informer.go:262] Caches are synced for cronjob
	I0806 07:57:37.337821       1 shared_informer.go:262] Caches are synced for ephemeral
	I0806 07:57:37.337840       1 shared_informer.go:262] Caches are synced for persistent volume
	I0806 07:57:37.341483       1 shared_informer.go:262] Caches are synced for namespace
	I0806 07:57:37.342591       1 shared_informer.go:262] Caches are synced for daemon sets
	I0806 07:57:37.381694       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0806 07:57:37.381755       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0806 07:57:37.381822       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0806 07:57:37.382809       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0806 07:57:37.435421       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0806 07:57:37.437410       1 shared_informer.go:262] Caches are synced for crt configmap
	I0806 07:57:37.437472       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0806 07:57:37.538529       1 shared_informer.go:262] Caches are synced for resource quota
	I0806 07:57:37.541423       1 shared_informer.go:262] Caches are synced for resource quota
	I0806 07:57:37.590125       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0806 07:57:37.956273       1 shared_informer.go:262] Caches are synced for garbage collector
	I0806 07:57:37.987550       1 shared_informer.go:262] Caches are synced for garbage collector
	I0806 07:57:37.987557       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0806 07:57:38.140740       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-24n4n"
	I0806 07:57:38.338861       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-pf4bn"
	I0806 07:57:38.341702       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-ktwn5"
	
	
	==> kube-proxy [880c527f21d1] <==
	I0806 07:57:39.294595       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0806 07:57:39.294672       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0806 07:57:39.294722       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0806 07:57:39.322692       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0806 07:57:39.322716       1 server_others.go:206] "Using iptables Proxier"
	I0806 07:57:39.322731       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0806 07:57:39.322847       1 server.go:661] "Version info" version="v1.24.1"
	I0806 07:57:39.322851       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:57:39.326597       1 config.go:317] "Starting service config controller"
	I0806 07:57:39.326605       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0806 07:57:39.326616       1 config.go:226] "Starting endpoint slice config controller"
	I0806 07:57:39.326617       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0806 07:57:39.327050       1 config.go:444] "Starting node config controller"
	I0806 07:57:39.327052       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0806 07:57:39.426672       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0806 07:57:39.426702       1 shared_informer.go:262] Caches are synced for service config
	I0806 07:57:39.427478       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [3145a8754ef7] <==
	W0806 07:57:21.199477       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0806 07:57:21.199494       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0806 07:57:21.199519       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:57:21.199551       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:57:21.199586       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:57:21.199605       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:57:21.199674       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:57:21.199677       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:57:21.199712       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0806 07:57:21.199719       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0806 07:57:21.199758       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0806 07:57:21.199768       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0806 07:57:21.199800       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:57:21.199812       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:57:22.037847       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:57:22.037950       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:57:22.085733       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:57:22.085750       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:57:22.181653       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0806 07:57:22.181745       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0806 07:57:22.182093       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:57:22.182145       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:57:22.212683       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0806 07:57:22.212754       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0806 07:57:22.787992       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-08-06 07:52:33 UTC, ends at Tue 2024-08-06 08:01:41 UTC. --
	Aug 06 07:57:25 running-upgrade-217000 kubelet[12924]: E0806 07:57:25.883256   12924 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-217000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-217000"
	Aug 06 07:57:37 running-upgrade-217000 kubelet[12924]: I0806 07:57:37.293648   12924 topology_manager.go:200] "Topology Admit Handler"
	Aug 06 07:57:37 running-upgrade-217000 kubelet[12924]: I0806 07:57:37.329648   12924 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 06 07:57:37 running-upgrade-217000 kubelet[12924]: I0806 07:57:37.330030   12924 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 06 07:57:37 running-upgrade-217000 kubelet[12924]: I0806 07:57:37.430666   12924 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k87gz\" (UniqueName: \"kubernetes.io/projected/51d8a93d-1b2b-4584-b8e6-701488dff6f4-kube-api-access-k87gz\") pod \"storage-provisioner\" (UID: \"51d8a93d-1b2b-4584-b8e6-701488dff6f4\") " pod="kube-system/storage-provisioner"
	Aug 06 07:57:37 running-upgrade-217000 kubelet[12924]: I0806 07:57:37.430695   12924 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/51d8a93d-1b2b-4584-b8e6-701488dff6f4-tmp\") pod \"storage-provisioner\" (UID: \"51d8a93d-1b2b-4584-b8e6-701488dff6f4\") " pod="kube-system/storage-provisioner"
	Aug 06 07:57:37 running-upgrade-217000 kubelet[12924]: E0806 07:57:37.534428   12924 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 06 07:57:37 running-upgrade-217000 kubelet[12924]: E0806 07:57:37.534469   12924 projected.go:192] Error preparing data for projected volume kube-api-access-k87gz for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 06 07:57:37 running-upgrade-217000 kubelet[12924]: E0806 07:57:37.534506   12924 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/51d8a93d-1b2b-4584-b8e6-701488dff6f4-kube-api-access-k87gz podName:51d8a93d-1b2b-4584-b8e6-701488dff6f4 nodeName:}" failed. No retries permitted until 2024-08-06 07:57:38.034492757 +0000 UTC m=+14.400219865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k87gz" (UniqueName: "kubernetes.io/projected/51d8a93d-1b2b-4584-b8e6-701488dff6f4-kube-api-access-k87gz") pod "storage-provisioner" (UID: "51d8a93d-1b2b-4584-b8e6-701488dff6f4") : configmap "kube-root-ca.crt" not found
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: E0806 07:57:38.036980   12924 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: E0806 07:57:38.037003   12924 projected.go:192] Error preparing data for projected volume kube-api-access-k87gz for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: E0806 07:57:38.037034   12924 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/51d8a93d-1b2b-4584-b8e6-701488dff6f4-kube-api-access-k87gz podName:51d8a93d-1b2b-4584-b8e6-701488dff6f4 nodeName:}" failed. No retries permitted until 2024-08-06 07:57:39.03702387 +0000 UTC m=+15.402751020 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-k87gz" (UniqueName: "kubernetes.io/projected/51d8a93d-1b2b-4584-b8e6-701488dff6f4-kube-api-access-k87gz") pod "storage-provisioner" (UID: "51d8a93d-1b2b-4584-b8e6-701488dff6f4") : configmap "kube-root-ca.crt" not found
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: I0806 07:57:38.142314   12924 topology_manager.go:200] "Topology Admit Handler"
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: I0806 07:57:38.340057   12924 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a31b6e42-cc7f-4bcd-b17f-ef32593a5630-xtables-lock\") pod \"kube-proxy-24n4n\" (UID: \"a31b6e42-cc7f-4bcd-b17f-ef32593a5630\") " pod="kube-system/kube-proxy-24n4n"
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: I0806 07:57:38.340086   12924 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a31b6e42-cc7f-4bcd-b17f-ef32593a5630-lib-modules\") pod \"kube-proxy-24n4n\" (UID: \"a31b6e42-cc7f-4bcd-b17f-ef32593a5630\") " pod="kube-system/kube-proxy-24n4n"
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: I0806 07:57:38.340110   12924 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a31b6e42-cc7f-4bcd-b17f-ef32593a5630-kube-proxy\") pod \"kube-proxy-24n4n\" (UID: \"a31b6e42-cc7f-4bcd-b17f-ef32593a5630\") " pod="kube-system/kube-proxy-24n4n"
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: I0806 07:57:38.340121   12924 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rh7r\" (UniqueName: \"kubernetes.io/projected/a31b6e42-cc7f-4bcd-b17f-ef32593a5630-kube-api-access-5rh7r\") pod \"kube-proxy-24n4n\" (UID: \"a31b6e42-cc7f-4bcd-b17f-ef32593a5630\") " pod="kube-system/kube-proxy-24n4n"
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: I0806 07:57:38.343839   12924 topology_manager.go:200] "Topology Admit Handler"
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: I0806 07:57:38.344718   12924 topology_manager.go:200] "Topology Admit Handler"
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: I0806 07:57:38.440556   12924 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43f1763d-d2ab-40e5-acb4-39a33e6f9267-config-volume\") pod \"coredns-6d4b75cb6d-pf4bn\" (UID: \"43f1763d-d2ab-40e5-acb4-39a33e6f9267\") " pod="kube-system/coredns-6d4b75cb6d-pf4bn"
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: I0806 07:57:38.541615   12924 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jft8\" (UniqueName: \"kubernetes.io/projected/43f1763d-d2ab-40e5-acb4-39a33e6f9267-kube-api-access-5jft8\") pod \"coredns-6d4b75cb6d-pf4bn\" (UID: \"43f1763d-d2ab-40e5-acb4-39a33e6f9267\") " pod="kube-system/coredns-6d4b75cb6d-pf4bn"
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: I0806 07:57:38.541675   12924 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed0e83f2-5a85-464a-9629-529183294a18-config-volume\") pod \"coredns-6d4b75cb6d-ktwn5\" (UID: \"ed0e83f2-5a85-464a-9629-529183294a18\") " pod="kube-system/coredns-6d4b75cb6d-ktwn5"
	Aug 06 07:57:38 running-upgrade-217000 kubelet[12924]: I0806 07:57:38.541701   12924 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpt4v\" (UniqueName: \"kubernetes.io/projected/ed0e83f2-5a85-464a-9629-529183294a18-kube-api-access-mpt4v\") pod \"coredns-6d4b75cb6d-ktwn5\" (UID: \"ed0e83f2-5a85-464a-9629-529183294a18\") " pod="kube-system/coredns-6d4b75cb6d-ktwn5"
	Aug 06 08:01:27 running-upgrade-217000 kubelet[12924]: I0806 08:01:27.248771   12924 scope.go:110] "RemoveContainer" containerID="e7dedf60b7d234020cad03a1b4ce107cf4662d25987549c651466447a796e15b"
	Aug 06 08:01:27 running-upgrade-217000 kubelet[12924]: I0806 08:01:27.269071   12924 scope.go:110] "RemoveContainer" containerID="c08c8ebaf71190767d001b6f31460a2ad0e198e9d08b663348fdae88f8451e46"
	
	
	==> storage-provisioner [060e7b2ec0dc] <==
	I0806 07:57:39.416537       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 07:57:39.421519       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 07:57:39.421870       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 07:57:39.425938       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 07:57:39.426911       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-217000_72773657-e094-4ce1-9296-992144194fca!
	I0806 07:57:39.427031       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c64b7806-26c0-4f98-89f6-e5aec4d1021e", APIVersion:"v1", ResourceVersion:"365", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-217000_72773657-e094-4ce1-9296-992144194fca became leader
	I0806 07:57:39.527213       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-217000_72773657-e094-4ce1-9296-992144194fca!
	

-- /stdout --
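
Note: the failure above reduces to GUEST_START, i.e. the probe loop logged by api_server.go never saw https://10.0.2.15:8443/healthz return healthy inside the 6m0s window. As a minimal illustrative sketch only (assumed names and timeouts, not minikube's actual api_server.go code), the polling pattern those log lines record looks roughly like this in Go:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz sketches the probe loop the log records: hit /healthz,
	// give each probe a short client timeout, retry until the overall
	// window expires. Names and values here are assumptions taken from
	// the log output above, not minikube internals.
	func pollHealthz(url string, window time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-probe timeout, matching the "Client.Timeout exceeded" entries
			Transport: &http.Transport{
				// The apiserver serves a self-signed cert here; a real client would trust the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(window)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // healthy
				}
			}
			time.Sleep(5 * time.Second) // the log shows roughly 5s between probes
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		if err := pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

Every probe in the run above failed this way, which is why the subsequent status check reports the apiserver as Stopped.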
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-217000 -n running-upgrade-217000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-217000 -n running-upgrade-217000: exit status 2 (15.740862792s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-217000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-217000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-217000
--- FAIL: TestRunningBinaryUpgrade (592.76s)

TestKubernetesUpgrade (18.96s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-400000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-400000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.888824042s)

-- stdout --
	* [kubernetes-upgrade-400000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-400000" primary control-plane node in "kubernetes-upgrade-400000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-400000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
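
Both VM creation attempts above fail the same way: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal Go sketch of that reachability check (illustrative only; minikube does not ship this helper) shows where the "Connection refused" surfaces when the daemon is not running or the socket path is wrong:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket the qemu2 driver expects; if socket_vmnet
		// is not listening, this fails with "connection refused" exactly
		// as the OUTPUT/ERROR lines above show.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}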
** stderr ** 
	I0806 00:55:04.149971    4452 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:55:04.150102    4452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:55:04.150104    4452 out.go:304] Setting ErrFile to fd 2...
	I0806 00:55:04.150107    4452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:55:04.150244    4452 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:55:04.151374    4452 out.go:298] Setting JSON to false
	I0806 00:55:04.167900    4452 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3272,"bootTime":1722927632,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:55:04.167969    4452 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:55:04.174684    4452 out.go:177] * [kubernetes-upgrade-400000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:55:04.181565    4452 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:55:04.181629    4452 notify.go:220] Checking for updates...
	I0806 00:55:04.188478    4452 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:55:04.191516    4452 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:55:04.194471    4452 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:55:04.197494    4452 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:55:04.200509    4452 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:55:04.203913    4452 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:55:04.203981    4452 config.go:182] Loaded profile config "running-upgrade-217000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 00:55:04.204031    4452 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:55:04.208508    4452 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 00:55:04.211430    4452 start.go:297] selected driver: qemu2
	I0806 00:55:04.211436    4452 start.go:901] validating driver "qemu2" against <nil>
	I0806 00:55:04.211441    4452 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:55:04.213866    4452 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:55:04.216432    4452 out.go:177] * Automatically selected the socket_vmnet network
	I0806 00:55:04.219491    4452 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 00:55:04.219517    4452 cni.go:84] Creating CNI manager for ""
	I0806 00:55:04.219524    4452 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0806 00:55:04.219556    4452 start.go:340] cluster config:
	{Name:kubernetes-upgrade-400000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:55:04.223184    4452 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:55:04.230448    4452 out.go:177] * Starting "kubernetes-upgrade-400000" primary control-plane node in "kubernetes-upgrade-400000" cluster
	I0806 00:55:04.234327    4452 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0806 00:55:04.234345    4452 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0806 00:55:04.234360    4452 cache.go:56] Caching tarball of preloaded images
	I0806 00:55:04.234431    4452 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:55:04.234443    4452 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0806 00:55:04.234496    4452 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/kubernetes-upgrade-400000/config.json ...
	I0806 00:55:04.234513    4452 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/kubernetes-upgrade-400000/config.json: {Name:mkf7f7d989cd4afed7588f50518d1672454f80fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:55:04.234861    4452 start.go:360] acquireMachinesLock for kubernetes-upgrade-400000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:55:04.234894    4452 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "kubernetes-upgrade-400000"
	I0806 00:55:04.234904    4452 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:55:04.234925    4452 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:55:04.243443    4452 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:55:04.259667    4452 start.go:159] libmachine.API.Create for "kubernetes-upgrade-400000" (driver="qemu2")
	I0806 00:55:04.259691    4452 client.go:168] LocalClient.Create starting
	I0806 00:55:04.259744    4452 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:55:04.259784    4452 main.go:141] libmachine: Decoding PEM data...
	I0806 00:55:04.259793    4452 main.go:141] libmachine: Parsing certificate...
	I0806 00:55:04.259837    4452 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:55:04.259868    4452 main.go:141] libmachine: Decoding PEM data...
	I0806 00:55:04.259877    4452 main.go:141] libmachine: Parsing certificate...
	I0806 00:55:04.260346    4452 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:55:04.420791    4452 main.go:141] libmachine: Creating SSH key...
	I0806 00:55:04.570001    4452 main.go:141] libmachine: Creating Disk image...
	I0806 00:55:04.570009    4452 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:55:04.570262    4452 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2
	I0806 00:55:04.579914    4452 main.go:141] libmachine: STDOUT: 
	I0806 00:55:04.579935    4452 main.go:141] libmachine: STDERR: 
	I0806 00:55:04.579988    4452 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2 +20000M
	I0806 00:55:04.588056    4452 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:55:04.588083    4452 main.go:141] libmachine: STDERR: 
	I0806 00:55:04.588100    4452 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2
	I0806 00:55:04.588105    4452 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:55:04.588116    4452 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:55:04.588143    4452 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:62:98:5c:f7:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2
	I0806 00:55:04.589831    4452 main.go:141] libmachine: STDOUT: 
	I0806 00:55:04.589844    4452 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:55:04.589867    4452 client.go:171] duration metric: took 330.174084ms to LocalClient.Create
	I0806 00:55:06.592040    4452 start.go:128] duration metric: took 2.35709775s to createHost
	I0806 00:55:06.592113    4452 start.go:83] releasing machines lock for "kubernetes-upgrade-400000", held for 2.357226625s
	W0806 00:55:06.592186    4452 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:55:06.604533    4452 out.go:177] * Deleting "kubernetes-upgrade-400000" in qemu2 ...
	W0806 00:55:06.632001    4452 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:55:06.632034    4452 start.go:729] Will try again in 5 seconds ...
	I0806 00:55:11.634170    4452 start.go:360] acquireMachinesLock for kubernetes-upgrade-400000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:55:11.634734    4452 start.go:364] duration metric: took 479.541µs to acquireMachinesLock for "kubernetes-upgrade-400000"
	I0806 00:55:11.634878    4452 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:55:11.635171    4452 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 00:55:11.640799    4452 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:55:11.692947    4452 start.go:159] libmachine.API.Create for "kubernetes-upgrade-400000" (driver="qemu2")
	I0806 00:55:11.692998    4452 client.go:168] LocalClient.Create starting
	I0806 00:55:11.693115    4452 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 00:55:11.693181    4452 main.go:141] libmachine: Decoding PEM data...
	I0806 00:55:11.693197    4452 main.go:141] libmachine: Parsing certificate...
	I0806 00:55:11.693257    4452 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 00:55:11.693302    4452 main.go:141] libmachine: Decoding PEM data...
	I0806 00:55:11.693340    4452 main.go:141] libmachine: Parsing certificate...
	I0806 00:55:11.693890    4452 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 00:55:11.863508    4452 main.go:141] libmachine: Creating SSH key...
	I0806 00:55:11.941053    4452 main.go:141] libmachine: Creating Disk image...
	I0806 00:55:11.941060    4452 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 00:55:11.941298    4452 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2
	I0806 00:55:11.950942    4452 main.go:141] libmachine: STDOUT: 
	I0806 00:55:11.950961    4452 main.go:141] libmachine: STDERR: 
	I0806 00:55:11.951014    4452 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2 +20000M
	I0806 00:55:11.958899    4452 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 00:55:11.958922    4452 main.go:141] libmachine: STDERR: 
	I0806 00:55:11.958933    4452 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2
	I0806 00:55:11.958938    4452 main.go:141] libmachine: Starting QEMU VM...
	I0806 00:55:11.958945    4452 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:55:11.958973    4452 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:a7:a7:0a:db:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2
	I0806 00:55:11.960740    4452 main.go:141] libmachine: STDOUT: 
	I0806 00:55:11.960754    4452 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:55:11.960774    4452 client.go:171] duration metric: took 267.772875ms to LocalClient.Create
	I0806 00:55:13.962973    4452 start.go:128] duration metric: took 2.327779084s to createHost
	I0806 00:55:13.963047    4452 start.go:83] releasing machines lock for "kubernetes-upgrade-400000", held for 2.3283025s
	W0806 00:55:13.963514    4452 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-400000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-400000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:55:13.977187    4452 out.go:177] 
	W0806 00:55:13.981265    4452 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:55:13.981299    4452 out.go:239] * 
	* 
	W0806 00:55:13.984012    4452 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:55:13.995090    4452 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-400000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
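
Both VM create attempts above die at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket. A minimal Go sketch (not minikube source; the socket path is the SocketVMnetPath value from the cluster config above) that reproduces the failing connectivity check:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the control socket that socket_vmnet_client needs before it
	// can hand a vmnet file descriptor to qemu. With the daemon down,
	// this fails with the same "connection refused" seen throughout the log.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
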
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-400000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-400000: (3.688153625s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-400000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-400000 status --format={{.Host}}: exit status 7 (39.255208ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-400000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-400000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.175744042s)

-- stdout --
	* [kubernetes-upgrade-400000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-400000" primary control-plane node in "kubernetes-upgrade-400000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-400000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-400000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 00:55:17.768435    4493 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:55:17.768568    4493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:55:17.768572    4493 out.go:304] Setting ErrFile to fd 2...
	I0806 00:55:17.768574    4493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:55:17.768714    4493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:55:17.769833    4493 out.go:298] Setting JSON to false
	I0806 00:55:17.786533    4493 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3285,"bootTime":1722927632,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:55:17.786613    4493 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:55:17.790730    4493 out.go:177] * [kubernetes-upgrade-400000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:55:17.796590    4493 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:55:17.796660    4493 notify.go:220] Checking for updates...
	I0806 00:55:17.803544    4493 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:55:17.806579    4493 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:55:17.809631    4493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:55:17.812554    4493 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:55:17.815554    4493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:55:17.818924    4493 config.go:182] Loaded profile config "kubernetes-upgrade-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0806 00:55:17.819190    4493 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:55:17.822577    4493 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 00:55:17.829748    4493 start.go:297] selected driver: qemu2
	I0806 00:55:17.829758    4493 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:55:17.829829    4493 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:55:17.831950    4493 cni.go:84] Creating CNI manager for ""
	I0806 00:55:17.831966    4493 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:55:17.832006    4493 start.go:340] cluster config:
	{Name:kubernetes-upgrade-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:55:17.835130    4493 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:55:17.842566    4493 out.go:177] * Starting "kubernetes-upgrade-400000" primary control-plane node in "kubernetes-upgrade-400000" cluster
	I0806 00:55:17.846638    4493 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 00:55:17.846655    4493 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0806 00:55:17.846670    4493 cache.go:56] Caching tarball of preloaded images
	I0806 00:55:17.846738    4493 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:55:17.846743    4493 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0806 00:55:17.846805    4493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/kubernetes-upgrade-400000/config.json ...
	I0806 00:55:17.847301    4493 start.go:360] acquireMachinesLock for kubernetes-upgrade-400000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:55:17.847341    4493 start.go:364] duration metric: took 34.25µs to acquireMachinesLock for "kubernetes-upgrade-400000"
	I0806 00:55:17.847349    4493 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:55:17.847355    4493 fix.go:54] fixHost starting: 
	I0806 00:55:17.847460    4493 fix.go:112] recreateIfNeeded on kubernetes-upgrade-400000: state=Stopped err=<nil>
	W0806 00:55:17.847469    4493 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:55:17.855648    4493 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-400000" ...
	I0806 00:55:17.859597    4493 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:55:17.859626    4493 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:a7:a7:0a:db:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2
	I0806 00:55:17.861429    4493 main.go:141] libmachine: STDOUT: 
	I0806 00:55:17.861446    4493 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:55:17.861472    4493 fix.go:56] duration metric: took 14.118208ms for fixHost
	I0806 00:55:17.861475    4493 start.go:83] releasing machines lock for "kubernetes-upgrade-400000", held for 14.130958ms
	W0806 00:55:17.861481    4493 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:55:17.861519    4493 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:55:17.861524    4493 start.go:729] Will try again in 5 seconds ...
	I0806 00:55:22.863799    4493 start.go:360] acquireMachinesLock for kubernetes-upgrade-400000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:55:22.864171    4493 start.go:364] duration metric: took 281.208µs to acquireMachinesLock for "kubernetes-upgrade-400000"
	I0806 00:55:22.864232    4493 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:55:22.864247    4493 fix.go:54] fixHost starting: 
	I0806 00:55:22.864734    4493 fix.go:112] recreateIfNeeded on kubernetes-upgrade-400000: state=Stopped err=<nil>
	W0806 00:55:22.864751    4493 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:55:22.870211    4493 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-400000" ...
	I0806 00:55:22.873178    4493 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:55:22.873398    4493 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:a7:a7:0a:db:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubernetes-upgrade-400000/disk.qcow2
	I0806 00:55:22.881152    4493 main.go:141] libmachine: STDOUT: 
	I0806 00:55:22.881203    4493 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 00:55:22.881282    4493 fix.go:56] duration metric: took 17.037708ms for fixHost
	I0806 00:55:22.881293    4493 start.go:83] releasing machines lock for "kubernetes-upgrade-400000", held for 17.107208ms
	W0806 00:55:22.881422    4493 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-400000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-400000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 00:55:22.891177    4493 out.go:177] 
	W0806 00:55:22.895219    4493 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 00:55:22.895236    4493 out.go:239] * 
	* 
	W0806 00:55:22.897061    4493 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:55:22.905107    4493 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-400000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-400000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-400000 version --output=json: exit status 1 (59.446875ms)

** stderr ** 
	error: context "kubernetes-upgrade-400000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
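
The kubectl failure is downstream of the start failures: the cluster was never provisioned, so no kubernetes-upgrade-400000 context was ever written to the kubeconfig. A hedged sketch (assumes k8s.io/client-go is on the module path; not part of this test suite) that inspects the kubeconfig directly:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG is set for this run to
	// /Users/jenkins/minikube-integration/19370-965/kubeconfig.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["kubernetes-upgrade-400000"]; !ok {
		// Matches the error above: context "kubernetes-upgrade-400000"
		// does not exist.
		fmt.Println("context missing")
		return
	}
	fmt.Println("context present")
}
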
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-06 00:55:22.977847 -0700 PDT m=+3070.280794376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-400000 -n kubernetes-upgrade-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-400000 -n kubernetes-upgrade-400000: exit status 7 (32.4175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-400000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-400000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-400000
--- FAIL: TestKubernetesUpgrade (18.96s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.75s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19370
- KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3269679516/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.75s)
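
DRV_UNSUPPORTED_OS is expected on this worker: hyperkit is an Intel-only hypervisor, so the driver exists solely for darwin/amd64 and can never start on an Apple Silicon agent. A hypothetical guard (an assumption for illustration, not the actual driver_install_or_update_test.go source) showing how a hyperkit subtest could skip rather than fail on this platform:

package upgrade_test // hypothetical package name

import (
	"runtime"
	"testing"
)

// skipUnlessHyperkitCapable skips any test that exercises the hyperkit
// driver when running on a platform other than darwin/amd64, the only
// platform the driver supports (compare the DRV_UNSUPPORTED_OS exit above).
func skipUnlessHyperkitCapable(t *testing.T) {
	t.Helper()
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		t.Skipf("hyperkit driver unsupported on %s/%s", runtime.GOOS, runtime.GOARCH)
	}
}
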

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.24s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19370
- KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current927391204/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.24s)

TestStoppedBinaryUpgrade/Upgrade (571.93s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2865183701 start -p stopped-upgrade-180000 --memory=2200 --vm-driver=qemu2 
E0806 00:55:48.423053    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2865183701 start -p stopped-upgrade-180000 --memory=2200 --vm-driver=qemu2 : (38.965971625s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2865183701 -p stopped-upgrade-180000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2865183701 -p stopped-upgrade-180000 stop: (12.12659625s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-180000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0806 00:58:35.458830    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 01:00:48.420842    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-180000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.748939708s)

-- stdout --
	* [stopped-upgrade-180000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-180000" primary control-plane node in "stopped-upgrade-180000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-180000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0806 00:56:15.218468    4539 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:56:15.218653    4539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:56:15.218658    4539 out.go:304] Setting ErrFile to fd 2...
	I0806 00:56:15.218661    4539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:56:15.218812    4539 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:56:15.219965    4539 out.go:298] Setting JSON to false
	I0806 00:56:15.239619    4539 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3343,"bootTime":1722927632,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:56:15.239698    4539 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:56:15.244024    4539 out.go:177] * [stopped-upgrade-180000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:56:15.250894    4539 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:56:15.250955    4539 notify.go:220] Checking for updates...
	I0806 00:56:15.258994    4539 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:56:15.262129    4539 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:56:15.265007    4539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:56:15.267975    4539 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:56:15.271045    4539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:56:15.274236    4539 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 00:56:15.277935    4539 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0806 00:56:15.280927    4539 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:56:15.284956    4539 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 00:56:15.290897    4539 start.go:297] selected driver: qemu2
	I0806 00:56:15.290903    4539 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0806 00:56:15.290964    4539 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:56:15.293626    4539 cni.go:84] Creating CNI manager for ""
	I0806 00:56:15.293645    4539 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:56:15.293668    4539 start.go:340] cluster config:
	{Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0806 00:56:15.293717    4539 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:56:15.300973    4539 out.go:177] * Starting "stopped-upgrade-180000" primary control-plane node in "stopped-upgrade-180000" cluster
	I0806 00:56:15.304959    4539 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0806 00:56:15.304977    4539 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0806 00:56:15.304986    4539 cache.go:56] Caching tarball of preloaded images
	I0806 00:56:15.305047    4539 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 00:56:15.305052    4539 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0806 00:56:15.305110    4539 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/config.json ...
	I0806 00:56:15.305621    4539 start.go:360] acquireMachinesLock for stopped-upgrade-180000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:56:15.305651    4539 start.go:364] duration metric: took 23.791µs to acquireMachinesLock for "stopped-upgrade-180000"
	I0806 00:56:15.305659    4539 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:56:15.305665    4539 fix.go:54] fixHost starting: 
	I0806 00:56:15.305775    4539 fix.go:112] recreateIfNeeded on stopped-upgrade-180000: state=Stopped err=<nil>
	W0806 00:56:15.305783    4539 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:56:15.313974    4539 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-180000" ...
	I0806 00:56:15.318007    4539 qemu.go:418] Using hvf for hardware acceleration
	I0806 00:56:15.318069    4539 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50451-:22,hostfwd=tcp::50452-:2376,hostname=stopped-upgrade-180000 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/disk.qcow2
	I0806 00:56:15.365426    4539 main.go:141] libmachine: STDOUT: 
	I0806 00:56:15.365455    4539 main.go:141] libmachine: STDERR: 
	I0806 00:56:15.365460    4539 main.go:141] libmachine: Waiting for VM to start (ssh -p 50451 docker@127.0.0.1)...
	I0806 00:56:34.941183    4539 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/config.json ...
	I0806 00:56:34.941762    4539 machine.go:94] provisionDockerMachine start ...
	I0806 00:56:34.941874    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:34.942190    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:34.942201    4539 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:56:35.014364    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 00:56:35.014389    4539 buildroot.go:166] provisioning hostname "stopped-upgrade-180000"
	I0806 00:56:35.014460    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:35.014650    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:35.014661    4539 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-180000 && echo "stopped-upgrade-180000" | sudo tee /etc/hostname
	I0806 00:56:35.081315    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-180000
	
	I0806 00:56:35.081374    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:35.081501    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:35.081510    4539 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-180000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-180000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-180000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:56:35.143539    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:56:35.143550    4539 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-965/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-965/.minikube}
	I0806 00:56:35.143560    4539 buildroot.go:174] setting up certificates
	I0806 00:56:35.143565    4539 provision.go:84] configureAuth start
	I0806 00:56:35.143571    4539 provision.go:143] copyHostCerts
	I0806 00:56:35.143630    4539 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-965/.minikube/cert.pem, removing ...
	I0806 00:56:35.143637    4539 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-965/.minikube/cert.pem
	I0806 00:56:35.143743    4539 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-965/.minikube/cert.pem (1123 bytes)
	I0806 00:56:35.143937    4539 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-965/.minikube/key.pem, removing ...
	I0806 00:56:35.143940    4539 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-965/.minikube/key.pem
	I0806 00:56:35.143986    4539 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-965/.minikube/key.pem (1675 bytes)
	I0806 00:56:35.144102    4539 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-965/.minikube/ca.pem, removing ...
	I0806 00:56:35.144105    4539 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-965/.minikube/ca.pem
	I0806 00:56:35.144163    4539 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-965/.minikube/ca.pem (1082 bytes)
	I0806 00:56:35.144263    4539 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-965/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-180000 san=[127.0.0.1 localhost minikube stopped-upgrade-180000]
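`provision.go:117` signs a server certificate with the local CA, embedding the IP and DNS SANs listed in the log line. A compressed, stdlib-only sketch of that signing step (the CA is generated in memory here for self-containment; in the log it is loaded from `ca.pem`/`ca-key.pem`, and errors are deliberately ignored to keep the sketch short):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair (stands in for ca.pem / ca-key.pem on disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-180000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-180000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```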
	I0806 00:56:35.259457    4539 provision.go:177] copyRemoteCerts
	I0806 00:56:35.259496    4539 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:56:35.259503    4539 sshutil.go:53] new ssh client: &{IP:localhost Port:50451 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0806 00:56:35.290923    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 00:56:35.297640    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0806 00:56:35.304198    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 00:56:35.311282    4539 provision.go:87] duration metric: took 167.712667ms to configureAuth
	I0806 00:56:35.311293    4539 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:56:35.311412    4539 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 00:56:35.311448    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:35.311536    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:35.311541    4539 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:56:35.369566    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:56:35.369577    4539 buildroot.go:70] root file system type: tmpfs
	I0806 00:56:35.369623    4539 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:56:35.369670    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:35.369790    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:35.369822    4539 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:56:35.432225    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:56:35.432273    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:35.432387    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:35.432398    4539 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:56:35.771684    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:56:35.771697    4539 machine.go:97] duration metric: took 829.931458ms to provisionDockerMachine
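The compound command above is a guard: `diff` exits non-zero when the unit changed (or, as in this run, when the old unit does not exist yet), and only then is the `.new` file moved into place and the daemon reloaded and restarted. The same idiom reproduced locally from Go as a single `bash -c` invocation, a sketch assuming passwordless sudo on the target:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	unit := "/lib/systemd/system/docker.service"
	// diff's non-zero exit status is what triggers the replace-and-restart branch.
	script := fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", unit)
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}
```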
	I0806 00:56:35.771704    4539 start.go:293] postStartSetup for "stopped-upgrade-180000" (driver="qemu2")
	I0806 00:56:35.771711    4539 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:56:35.771762    4539 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:56:35.771771    4539 sshutil.go:53] new ssh client: &{IP:localhost Port:50451 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0806 00:56:35.805514    4539 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:56:35.806775    4539 info.go:137] Remote host: Buildroot 2021.02.12
	I0806 00:56:35.806784    4539 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-965/.minikube/addons for local assets ...
	I0806 00:56:35.806876    4539 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-965/.minikube/files for local assets ...
	I0806 00:56:35.806969    4539 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem -> 14552.pem in /etc/ssl/certs
	I0806 00:56:35.807063    4539 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:56:35.809569    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem --> /etc/ssl/certs/14552.pem (1708 bytes)
	I0806 00:56:35.816298    4539 start.go:296] duration metric: took 44.59ms for postStartSetup
	I0806 00:56:35.816312    4539 fix.go:56] duration metric: took 20.510780875s for fixHost
	I0806 00:56:35.816343    4539 main.go:141] libmachine: Using SSH client type: native
	I0806 00:56:35.816447    4539 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d12a10] 0x102d15270 <nil>  [] 0s} localhost 50451 <nil> <nil>}
	I0806 00:56:35.816453    4539 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:56:35.877596    4539 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722930995.749809796
	
	I0806 00:56:35.877606    4539 fix.go:216] guest clock: 1722930995.749809796
	I0806 00:56:35.877610    4539 fix.go:229] Guest: 2024-08-06 00:56:35.749809796 -0700 PDT Remote: 2024-08-06 00:56:35.816313 -0700 PDT m=+20.629547251 (delta=-66.503204ms)
	I0806 00:56:35.877621    4539 fix.go:200] guest clock delta is within tolerance: -66.503204ms
	I0806 00:56:35.877624    4539 start.go:83] releasing machines lock for "stopped-upgrade-180000", held for 20.572101708s
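`fix.go` reads `date +%s.%N` from the guest, compares it against the host clock, and only resyncs when the delta leaves tolerance (here it is -66.5ms, so nothing happens). A stdlib sketch of the parse-and-compare step; the 2s tolerance is an assumed value for illustration, not minikube's actual constant:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1722930995.749809796")
	delta := time.Since(guest)
	tolerance := 2 * time.Second // assumed threshold for illustration
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v out of tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v within tolerance\n", delta)
	}
}
```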
	I0806 00:56:35.877689    4539 ssh_runner.go:195] Run: cat /version.json
	I0806 00:56:35.877696    4539 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:56:35.877697    4539 sshutil.go:53] new ssh client: &{IP:localhost Port:50451 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0806 00:56:35.877714    4539 sshutil.go:53] new ssh client: &{IP:localhost Port:50451 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	W0806 00:56:35.878265    4539 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50578->127.0.0.1:50451: write: broken pipe
	I0806 00:56:35.878282    4539 retry.go:31] will retry after 344.28547ms: ssh: handshake failed: write tcp 127.0.0.1:50578->127.0.0.1:50451: write: broken pipe
	W0806 00:56:35.909050    4539 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0806 00:56:35.909097    4539 ssh_runner.go:195] Run: systemctl --version
	I0806 00:56:35.910780    4539 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 00:56:35.912428    4539 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:56:35.912457    4539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0806 00:56:35.915840    4539 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0806 00:56:35.920765    4539 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:56:35.920783    4539 start.go:495] detecting cgroup driver to use...
	I0806 00:56:35.920855    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:56:35.927806    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0806 00:56:35.931351    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:56:35.934365    4539 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:56:35.934392    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:56:35.937113    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:56:35.940237    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:56:35.943762    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:56:35.947051    4539 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:56:35.949880    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:56:35.952804    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:56:35.956040    4539 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:56:35.959311    4539 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:56:35.962189    4539 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:56:35.964829    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:56:36.044875    4539 ssh_runner.go:195] Run: sudo systemctl restart containerd
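The run of `sed -i -r` calls above rewrites `/etc/containerd/config.toml` in place so containerd uses the "cgroupfs" driver, then reloads systemd and restarts containerd. Over SSH minikube shells out to sed as logged; done locally, the same substitution is a one-liner with Go's regexp package. A sketch mirroring the `SystemdCgroup` edit:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// forceCgroupfs flips SystemdCgroup to false, mirroring
// `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`.
func forceCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := forceCgroupfs("config.toml"); err != nil {
		fmt.Println("edit failed:", err)
	}
}
```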
	I0806 00:56:36.051051    4539 start.go:495] detecting cgroup driver to use...
	I0806 00:56:36.051108    4539 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:56:36.057568    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:56:36.062855    4539 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:56:36.069097    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:56:36.074136    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:56:36.078567    4539 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:56:36.127722    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:56:36.133151    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:56:36.138885    4539 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:56:36.140148    4539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:56:36.143085    4539 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:56:36.147985    4539 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:56:36.210905    4539 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:56:36.273120    4539 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:56:36.273181    4539 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:56:36.278240    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:56:36.350446    4539 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:56:37.502489    4539 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.152033083s)
	I0806 00:56:37.502543    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:56:37.508381    4539 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:56:37.515808    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:56:37.520451    4539 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:56:37.582733    4539 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:56:37.646368    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:56:37.710183    4539 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:56:37.715644    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:56:37.720153    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:56:37.787631    4539 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:56:37.827003    4539 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:56:37.827080    4539 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:56:37.829221    4539 start.go:563] Will wait 60s for crictl version
	I0806 00:56:37.829270    4539 ssh_runner.go:195] Run: which crictl
	I0806 00:56:37.831020    4539 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:56:37.845793    4539 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
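Both "Will wait 60s for socket path" and "Will wait 60s for crictl version" above are deadline polls: a probe is retried until it succeeds or the time budget expires. A generic sketch of the pattern; the probe shown (`stat` on the CRI socket) matches the log, while the 500ms retry interval is an assumption:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitFor polls probe() until it succeeds or the deadline passes.
func waitFor(d time.Duration, probe func() error) error {
	deadline := time.Now().Add(d)
	var err error
	for time.Now().Before(deadline) {
		if err = probe(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed retry interval
	}
	return fmt.Errorf("timed out after %v: %w", d, err)
}

func main() {
	err := waitFor(60*time.Second, func() error {
		_, e := os.Stat("/var/run/cri-dockerd.sock")
		return e
	})
	fmt.Println("wait result:", err)
}
```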
	I0806 00:56:37.845864    4539 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:56:37.861564    4539 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:56:37.883814    4539 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0806 00:56:37.883892    4539 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0806 00:56:37.885210    4539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:56:37.888666    4539 kubeadm.go:883] updating cluster {Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0806 00:56:37.888716    4539 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0806 00:56:37.888756    4539 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:56:37.899597    4539 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:56:37.899606    4539 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0806 00:56:37.899657    4539 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:56:37.903218    4539 ssh_runner.go:195] Run: which lz4
	I0806 00:56:37.904463    4539 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 00:56:37.905793    4539 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:56:37.905802    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0806 00:56:38.874580    4539 docker.go:649] duration metric: took 970.153792ms to copy over tarball
	I0806 00:56:38.874645    4539 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 00:56:40.034418    4539 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.159767458s)
	I0806 00:56:40.034432    4539 ssh_runner.go:146] rm: /preloaded.tar.lz4
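After scp-ing the 359 MB preload tarball, it is unpacked with `tar -I lz4` into /var and the elapsed time is recorded as a "duration metric". A sketch of that run-and-time pattern (paths as in the log; assumes `lz4` is installed on the target):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	// Mirrors the "duration metric: took ..." lines in the log.
	fmt.Printf("duration metric: took %s to extract preload\n", time.Since(start))
}
```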
	I0806 00:56:40.050565    4539 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:56:40.053849    4539 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0806 00:56:40.058904    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:56:40.127581    4539 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:56:41.702267    4539 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.574679084s)
	I0806 00:56:41.702376    4539 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:56:41.716733    4539 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:56:41.716742    4539 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0806 00:56:41.716753    4539 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 00:56:41.721522    4539 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:56:41.723094    4539 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:56:41.724403    4539 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:56:41.724411    4539 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:56:41.725915    4539 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:56:41.726000    4539 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:56:41.727232    4539 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0806 00:56:41.727198    4539 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:56:41.728005    4539 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0806 00:56:41.729077    4539 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:56:41.729607    4539 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:56:41.730142    4539 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0806 00:56:41.730967    4539 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0806 00:56:41.730991    4539 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0806 00:56:41.731820    4539 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:56:41.732471    4539 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0806 00:56:42.046339    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:56:42.056509    4539 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0806 00:56:42.056538    4539 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:56:42.056590    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0806 00:56:42.067164    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
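The `cache_images` sequence above decides an image "needs transfer" by inspecting its ID in the container runtime and comparing it against the expected hash; on mismatch the stale tag is removed with `docker rmi` and the cached tarball is queued for loading. A sketch of that check (the digest is truncated here for brevity; `docker` CLI assumed on PATH):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime's image ID differs from want.
func needsTransfer(image, want string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present in the runtime at all
	}
	return strings.TrimSpace(string(out)) != want
}

func main() {
	img := "registry.k8s.io/kube-scheduler:v1.24.1"
	if needsTransfer(img, "sha256:000c19baf6bb...") { // illustrative, truncated digest
		exec.Command("docker", "rmi", img).Run() // drop the stale tag first
		fmt.Println("would load", img, "from the local cache")
	}
}
```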
	I0806 00:56:42.082225    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:56:42.092065    4539 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0806 00:56:42.092090    4539 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:56:42.092136    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0806 00:56:42.102077    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0806 00:56:42.105270    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:56:42.114920    4539 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0806 00:56:42.114940    4539 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:56:42.114991    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0806 00:56:42.124776    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0806 00:56:42.129686    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0806 00:56:42.140008    4539 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0806 00:56:42.140027    4539 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0806 00:56:42.140078    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0806 00:56:42.150117    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0806 00:56:42.150238    4539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0806 00:56:42.152400    4539 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0806 00:56:42.152411    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0806 00:56:42.159935    4539 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0806 00:56:42.159943    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
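Images are streamed into the daemon with `sudo cat <tar> | docker load`. Run locally, the pipe can be replaced by wiring the tarball straight to the command's stdin; a sketch:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	f, err := os.Open("/var/lib/minikube/images/pause_3.7")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	// Equivalent of `sudo cat <file> | docker load`, minus the extra cat.
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}
```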
	I0806 00:56:42.174313    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	W0806 00:56:42.187574    4539 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0806 00:56:42.187709    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:56:42.196189    4539 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0806 00:56:42.196213    4539 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0806 00:56:42.196231    4539 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0806 00:56:42.196288    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0806 00:56:42.211399    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0806 00:56:42.212886    4539 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0806 00:56:42.212902    4539 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:56:42.212927    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0806 00:56:42.216399    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0806 00:56:42.226805    4539 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0806 00:56:42.226814    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0806 00:56:42.226822    4539 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0806 00:56:42.226865    4539 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0806 00:56:42.226927    4539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0806 00:56:42.236468    4539 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0806 00:56:42.236497    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0806 00:56:42.236557    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0806 00:56:42.236651    4539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0806 00:56:42.242859    4539 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0806 00:56:42.242891    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0806 00:56:42.308071    4539 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0806 00:56:42.308087    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0806 00:56:42.399159    4539 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0806 00:56:42.405274    4539 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0806 00:56:42.405391    4539 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:56:42.440969    4539 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0806 00:56:42.440994    4539 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:56:42.441049    4539 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:56:42.487729    4539 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0806 00:56:42.487850    4539 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0806 00:56:42.507270    4539 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0806 00:56:42.507303    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0806 00:56:42.552648    4539 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0806 00:56:42.552664    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0806 00:56:42.698367    4539 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0806 00:56:42.698389    4539 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0806 00:56:42.698397    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0806 00:56:42.931665    4539 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0806 00:56:42.931703    4539 cache_images.go:92] duration metric: took 1.214951s to LoadCachedImages
	W0806 00:56:42.931743    4539 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0806 00:56:42.931749    4539 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0806 00:56:42.931805    4539 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-180000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:56:42.931873    4539 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:56:42.947330    4539 cni.go:84] Creating CNI manager for ""
	I0806 00:56:42.947344    4539 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:56:42.947348    4539 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:56:42.947356    4539 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-180000 NodeName:stopped-upgrade-180000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:56:42.947425    4539 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-180000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
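`kubeadm.go:187` renders the config above from the option struct logged at `kubeadm.go:181`. A toy text/template sketch of that render step, with the struct and template trimmed to a few of the fields seen in the log (this is not the real bsutil template):

```go
package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress: "10.0.2.15",
		BindPort:         8443,
		NodeName:         "stopped-upgrade-180000",
		CRISocket:        "unix:///var/run/cri-dockerd.sock",
		PodSubnet:        "10.244.0.0/16",
	}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
}
```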
	I0806 00:56:42.947492    4539 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0806 00:56:42.951095    4539 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:56:42.951136    4539 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:56:42.953992    4539 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0806 00:56:42.958503    4539 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:56:42.963648    4539 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0806 00:56:42.968966    4539 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0806 00:56:42.970236    4539 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:56:42.973627    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:56:43.030692    4539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:56:43.037781    4539 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000 for IP: 10.0.2.15
	I0806 00:56:43.037793    4539 certs.go:194] generating shared ca certs ...
	I0806 00:56:43.037804    4539 certs.go:226] acquiring lock for ca certs: {Name:mkb2ca998ea1a45f9f580d4d76a58064c889c60a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:56:43.037990    4539 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-965/.minikube/ca.key
	I0806 00:56:43.038025    4539 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-965/.minikube/proxy-client-ca.key
	I0806 00:56:43.038030    4539 certs.go:256] generating profile certs ...
	I0806 00:56:43.038093    4539 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/client.key
	I0806 00:56:43.038109    4539 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156
	I0806 00:56:43.038119    4539 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0806 00:56:43.156257    4539 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156 ...
	I0806 00:56:43.156270    4539 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156: {Name:mk0f3c36402afeb1e7009d40760ebfbe8cd2bc95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:56:43.156536    4539 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156 ...
	I0806 00:56:43.156541    4539 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156: {Name:mk1b77fe9e2f52c52bcf1128eb177bf67d544f40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:56:43.156666    4539 certs.go:381] copying /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156 -> /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.crt
	I0806 00:56:43.156787    4539 certs.go:385] copying /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156 -> /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.key
	I0806 00:56:43.156911    4539 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/proxy-client.key
	I0806 00:56:43.157038    4539 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/1455.pem (1338 bytes)
	W0806 00:56:43.157059    4539 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-965/.minikube/certs/1455_empty.pem, impossibly tiny 0 bytes
	I0806 00:56:43.157063    4539 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca-key.pem (1679 bytes)
	I0806 00:56:43.157085    4539 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem (1082 bytes)
	I0806 00:56:43.157102    4539 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:56:43.157119    4539 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/certs/key.pem (1675 bytes)
	I0806 00:56:43.157159    4539 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem (1708 bytes)
	I0806 00:56:43.157471    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:56:43.164446    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:56:43.171454    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:56:43.178633    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:56:43.185512    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0806 00:56:43.192239    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 00:56:43.199675    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:56:43.207111    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0806 00:56:43.214109    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/certs/1455.pem --> /usr/share/ca-certificates/1455.pem (1338 bytes)
	I0806 00:56:43.220514    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/ssl/certs/14552.pem --> /usr/share/ca-certificates/14552.pem (1708 bytes)
	I0806 00:56:43.227669    4539 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:56:43.235168    4539 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:56:43.240485    4539 ssh_runner.go:195] Run: openssl version
	I0806 00:56:43.242508    4539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1455.pem && ln -fs /usr/share/ca-certificates/1455.pem /etc/ssl/certs/1455.pem"
	I0806 00:56:43.245442    4539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1455.pem
	I0806 00:56:43.246824    4539 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:12 /usr/share/ca-certificates/1455.pem
	I0806 00:56:43.246848    4539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1455.pem
	I0806 00:56:43.248772    4539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1455.pem /etc/ssl/certs/51391683.0"
	I0806 00:56:43.252102    4539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14552.pem && ln -fs /usr/share/ca-certificates/14552.pem /etc/ssl/certs/14552.pem"
	I0806 00:56:43.255365    4539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14552.pem
	I0806 00:56:43.256833    4539 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:12 /usr/share/ca-certificates/14552.pem
	I0806 00:56:43.256855    4539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14552.pem
	I0806 00:56:43.258786    4539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14552.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:56:43.261904    4539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:56:43.264909    4539 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:56:43.266492    4539 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:56:43.266516    4539 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:56:43.268364    4539 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
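Each CA is linked into /etc/ssl/certs under its OpenSSL subject hash (`openssl x509 -hash -noout` yields `b5213941` for minikubeCA in this run) so the system trust store can resolve it by name. A sketch of the two steps; the hash computation is delegated to openssl because Go's stdlib does not expose that legacy hash:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// Step 1: ask openssl for the subject hash (e.g. "b5213941").
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("openssl:", err)
		return
	}
	hash := strings.TrimSpace(string(out))

	// Step 2: equivalent of `ln -fs <cert> /etc/ssl/certs/<hash>.0`.
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // -f semantics: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		fmt.Println("symlink:", err)
	}
}
```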
	I0806 00:56:43.271642    4539 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:56:43.273125    4539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 00:56:43.275086    4539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 00:56:43.276933    4539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 00:56:43.278879    4539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 00:56:43.280709    4539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 00:56:43.282517    4539 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
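Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same test expressed in Go compares NotAfter against now plus the window; a sketch:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}
```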
	I0806 00:56:43.284392    4539 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50486 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0806 00:56:43.284460    4539 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:56:43.295241    4539 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:56:43.298303    4539 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 00:56:43.298309    4539 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 00:56:43.298334    4539 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 00:56:43.301098    4539 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:56:43.301394    4539 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-180000" does not appear in /Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:56:43.301493    4539 kubeconfig.go:62] /Users/jenkins/minikube-integration/19370-965/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-180000" cluster setting kubeconfig missing "stopped-upgrade-180000" context setting]
	I0806 00:56:43.301695    4539 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/kubeconfig: {Name:mk054609795edfdc491af119142ed9d8e6063b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:56:43.302090    4539 kapi.go:59] client config for stopped-upgrade-180000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040a7f90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:56:43.302396    4539 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 00:56:43.305035    4539 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-180000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
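	(The drift shown above is why this restart reconfigures the cluster: the new config moves criSocket to the unix:// URI form that newer kubeadm/cri-dockerd combinations expect, and changes the kubelet's cgroup driver, hairpin mode, and runtime timeout. Detection itself is just diff's exit status; a sketch of the check-and-replace sequence this log performs, using the same paths:
	  # diff exits 0 when the files are identical, 1 when they differ
	  if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	  fi
	)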
	I0806 00:56:43.305041    4539 kubeadm.go:1160] stopping kube-system containers ...
	I0806 00:56:43.305078    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:56:43.315281    4539 docker.go:483] Stopping containers: [9418470fa8b3 b9546762696e 5082f389d196 29ee1941e223 974c0bca9922 729430c6b14e f2620bcfc6ae 2d13495e1513]
	I0806 00:56:43.315361    4539 ssh_runner.go:195] Run: docker stop 9418470fa8b3 b9546762696e 5082f389d196 29ee1941e223 974c0bca9922 729430c6b14e f2620bcfc6ae 2d13495e1513
	I0806 00:56:43.326327    4539 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 00:56:43.331649    4539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:56:43.334899    4539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:56:43.334904    4539 kubeadm.go:157] found existing configuration files:
	
	I0806 00:56:43.334925    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf
	I0806 00:56:43.337537    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:56:43.337566    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:56:43.340278    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf
	I0806 00:56:43.343472    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:56:43.343495    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:56:43.346549    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf
	I0806 00:56:43.349190    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:56:43.349217    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:56:43.352304    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf
	I0806 00:56:43.355355    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:56:43.355377    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
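	(The four grep/rm pairs above are a stale-kubeconfig sweep: any generated kubeconfig that does not already point at the expected control-plane endpoint is deleted so kubeadm can regenerate it. A compact equivalent of the loop, using the endpoint from this run:
	  endpoint="https://control-plane.minikube.internal:50486"
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	  done
	)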
	I0806 00:56:43.358102    4539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:56:43.360755    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:56:43.382914    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:56:43.781533    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:56:43.888298    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:56:43.913319    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
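	(Rather than a full "kubeadm init", the restart path replays the individual init phases against the refreshed config, in dependency order: certs, kubeconfigs, kubelet start, static control-plane pods, then local etcd. The same sequence run by hand, with the version-pinned binaries dir prepended to PATH as above:
	  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	      /bin/bash -c "kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml"
	  done
	)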
	I0806 00:56:43.934157    4539 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:56:43.934236    4539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:56:44.435242    4539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:56:44.936288    4539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:56:44.941460    4539 api_server.go:72] duration metric: took 1.007312208s to wait for apiserver process to appear ...
	I0806 00:56:44.941467    4539 api_server.go:88] waiting for apiserver healthz status ...
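	(The readiness wait that follows is a plain HTTPS probe of the apiserver's /healthz endpoint, retried until it answers; each attempt in this log times out after about five seconds. A manual equivalent, where -k is needed because the apiserver's certificate is not in the host trust store:
	  curl -sk --max-time 5 https://10.0.2.15:8443/healthz
	)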
	I0806 00:56:44.941475    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:49.943600    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:49.943638    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:54.944040    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:54.944117    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:56:59.944993    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:56:59.945066    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:04.946139    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:04.946231    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:09.947759    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:09.947839    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:14.949649    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:14.949718    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:19.952216    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:19.952246    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:24.954471    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:24.954550    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:29.954891    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:29.954929    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:34.957196    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:34.957234    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:39.959399    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:39.959446    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:44.960870    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:44.961200    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:57:44.995097    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
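	(These docker ps filters work because the kubelet, through cri-dockerd in this run, names pod containers with a "k8s_"-prefixed pattern that embeds the container, pod, and namespace names, so matching name=k8s_<component> isolates one control-plane component. Listing names alongside IDs makes that visible:
	  docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}} {{.Names}}'
	)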
	I0806 00:57:44.995236    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:57:45.015447    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:57:45.015550    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:57:45.029538    4539 logs.go:276] 0 containers: []
	W0806 00:57:45.029552    4539 logs.go:278] No container was found matching "coredns"
	I0806 00:57:45.029612    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:57:45.041391    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:57:45.041457    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:57:45.052037    4539 logs.go:276] 0 containers: []
	W0806 00:57:45.052046    4539 logs.go:278] No container was found matching "kube-proxy"
	I0806 00:57:45.052102    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:57:45.068884    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:57:45.068950    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:57:45.081002    4539 logs.go:276] 0 containers: []
	W0806 00:57:45.081013    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:57:45.081063    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:57:45.091772    4539 logs.go:276] 0 containers: []
	W0806 00:57:45.091787    4539 logs.go:278] No container was found matching "storage-provisioner"
	I0806 00:57:45.091793    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:57:45.091799    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:57:45.105980    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:57:45.105993    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:57:45.121934    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:57:45.121947    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:57:45.136863    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:57:45.136876    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:57:45.159704    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:57:45.159714    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:57:45.174825    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:57:45.174837    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:57:45.192207    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:57:45.192218    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:57:45.196693    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:57:45.196698    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:57:45.307988    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:57:45.308006    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:57:45.319259    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:57:45.319271    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:57:45.337068    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:57:45.337080    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:57:45.359982    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:57:45.359993    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:57:45.388239    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:57:45.388251    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:57:47.904729    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:57:52.907070    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:57:52.907360    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:57:52.936512    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:57:52.936635    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:57:52.953790    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:57:52.953862    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:57:52.966423    4539 logs.go:276] 0 containers: []
	W0806 00:57:52.966442    4539 logs.go:278] No container was found matching "coredns"
	I0806 00:57:52.966515    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:57:52.977947    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:57:52.978011    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:57:52.987797    4539 logs.go:276] 0 containers: []
	W0806 00:57:52.987806    4539 logs.go:278] No container was found matching "kube-proxy"
	I0806 00:57:52.987852    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:57:52.998296    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:57:52.998354    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:57:53.031169    4539 logs.go:276] 0 containers: []
	W0806 00:57:53.031184    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:57:53.031236    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:57:53.044597    4539 logs.go:276] 0 containers: []
	W0806 00:57:53.044607    4539 logs.go:278] No container was found matching "storage-provisioner"
	I0806 00:57:53.044612    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:57:53.044618    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:57:53.056342    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:57:53.056351    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:57:53.061050    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:57:53.061058    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:57:53.075936    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:57:53.075951    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:57:53.091226    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:57:53.091236    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:57:53.108307    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:57:53.108319    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:57:53.128686    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:57:53.128697    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:57:53.152645    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:57:53.152656    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:57:53.175831    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:57:53.175841    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:57:53.203452    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:57:53.203463    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:57:53.246475    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:57:53.246485    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:57:53.259668    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:57:53.259682    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:57:53.273383    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:57:53.273396    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:57:55.789592    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:00.791812    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:00.791990    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:00.812240    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:00.812340    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:00.827719    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:00.827792    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:00.840625    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:00.840701    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:00.851294    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:00.851357    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:00.861844    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:00.861927    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:00.873020    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:00.873090    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:00.883171    4539 logs.go:276] 0 containers: []
	W0806 00:58:00.883182    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:00.883234    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:00.893344    4539 logs.go:276] 1 containers: [cc8735fa11c6]
	I0806 00:58:00.893363    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:00.893369    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:00.917770    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:00.917781    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:00.921590    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:00.921597    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:00.962130    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:00.962141    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:00.975898    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:00.975908    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:00.988085    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:00.988096    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:00.999390    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:00.999402    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:01.011225    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:01.011236    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:01.039714    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:01.039722    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:01.054777    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:01.054788    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:01.078281    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:01.078306    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:01.095244    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:01.095254    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:01.109694    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:01.109706    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:01.122655    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:01.122669    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:01.134231    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:01.134243    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:01.149124    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:01.149141    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:03.669299    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:08.671532    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:08.671650    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:08.685491    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:08.685565    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:08.696224    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:08.696285    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:08.707647    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:08.707706    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:08.718443    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:08.718515    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:08.729264    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:08.729335    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:08.739889    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:08.739956    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:08.750856    4539 logs.go:276] 0 containers: []
	W0806 00:58:08.750868    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:08.750922    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:08.761472    4539 logs.go:276] 1 containers: [cc8735fa11c6]
	I0806 00:58:08.761490    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:08.761495    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:08.778293    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:08.778303    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:08.783030    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:08.783038    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:08.793758    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:08.793770    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:08.816932    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:08.816949    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:08.833831    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:08.833843    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:08.845102    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:08.845112    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:08.869693    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:08.869704    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:08.897216    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:08.897230    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:08.912846    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:08.912857    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:08.927035    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:08.927045    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:08.938568    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:08.938578    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:08.951359    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:08.951370    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:08.965626    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:08.965638    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:08.984806    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:08.984818    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:08.997019    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:08.997031    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:11.536308    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:16.538602    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:16.539037    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:16.580402    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:16.580552    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:16.602446    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:16.602536    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:16.617602    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:16.617684    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:16.629938    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:16.630020    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:16.640706    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:16.640771    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:16.651516    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:16.651587    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:16.661721    4539 logs.go:276] 0 containers: []
	W0806 00:58:16.661732    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:16.661788    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:16.672503    4539 logs.go:276] 1 containers: [cc8735fa11c6]
	I0806 00:58:16.672523    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:16.672529    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:16.687114    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:16.687127    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:16.702103    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:16.702118    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:16.722057    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:16.722066    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:16.739873    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:16.739882    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:16.768377    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:16.768385    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:16.791109    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:16.791124    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:16.802621    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:16.802633    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:16.806890    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:16.806899    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:16.841838    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:16.841854    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:16.856010    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:16.856018    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:16.867199    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:16.867211    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:16.878197    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:16.878209    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:16.902101    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:16.902107    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:16.915037    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:16.915048    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:16.926124    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:16.926134    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:19.443348    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:24.443782    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:24.443921    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:24.461571    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:24.461648    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:24.472125    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:24.472191    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:24.482487    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:24.482580    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:24.492757    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:24.492827    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:24.503057    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:24.503134    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:24.513912    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:24.513981    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:24.529696    4539 logs.go:276] 0 containers: []
	W0806 00:58:24.529710    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:24.529770    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:24.540091    4539 logs.go:276] 1 containers: [cc8735fa11c6]
	I0806 00:58:24.540112    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:24.540117    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:24.569051    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:24.569064    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:24.602347    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:24.602367    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:24.623290    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:24.623300    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:24.680410    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:24.680424    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:24.692134    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:24.692144    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:24.722248    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:24.722262    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:24.735906    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:24.735916    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:24.751204    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:24.751217    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:24.762854    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:24.762865    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:24.776392    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:24.776401    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:24.800427    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:24.800437    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:24.821643    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:24.821657    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:24.834341    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:24.834355    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:24.851898    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:24.851909    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:24.868446    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:24.868455    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:27.375087    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:32.377456    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:32.377590    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:32.391962    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:32.392047    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:32.404346    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:32.404421    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:32.416607    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:32.416679    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:32.427625    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:32.427703    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:32.438565    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:32.438631    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:32.449328    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:32.449390    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:32.460108    4539 logs.go:276] 0 containers: []
	W0806 00:58:32.460123    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:32.460182    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:32.470796    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:58:32.470814    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:32.470819    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:32.499390    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:32.499404    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:32.516847    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:32.516856    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:32.534285    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:32.534295    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:32.538396    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:32.538402    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:32.572709    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:32.572720    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:32.590057    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:32.590068    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:32.604457    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:32.604467    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:32.616902    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:32.616914    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:32.641322    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:58:32.641337    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:58:32.653393    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:32.653406    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:32.676874    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:32.676883    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:32.691331    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:32.691342    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:32.703971    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:32.703982    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:32.721686    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:32.721701    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:32.735877    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:32.735890    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:32.747235    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:32.747245    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:35.260523    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:40.263129    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:40.263263    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:40.278503    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:40.278586    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:40.290724    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:40.290787    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:40.301267    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:40.301328    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:40.312001    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:40.312073    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:40.322668    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:40.322744    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:40.333695    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:40.333763    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:40.345092    4539 logs.go:276] 0 containers: []
	W0806 00:58:40.345105    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:40.345161    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:40.355712    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:58:40.355732    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:40.355738    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:40.368702    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:40.368712    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:40.379661    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:40.379677    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:40.402999    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:40.403010    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:40.415769    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:40.415780    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:40.420585    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:40.420594    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:40.439250    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:40.439260    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:40.462174    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:40.462187    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:40.474300    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:40.474311    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:40.501881    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:40.501891    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:40.516258    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:40.516270    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:40.533963    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:58:40.533972    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:58:40.545725    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:40.545734    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:40.559534    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:40.559545    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:40.579148    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:40.579160    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:40.603136    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:40.603144    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:40.638044    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:40.638056    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:43.152872    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:48.155499    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:48.155606    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:48.169738    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:48.169820    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:48.180886    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:48.180963    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:48.191360    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:48.191432    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:48.202184    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:48.202256    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:48.212895    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:48.212964    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:48.223660    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:48.223722    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:48.233674    4539 logs.go:276] 0 containers: []
	W0806 00:58:48.233685    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:48.233742    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:48.244013    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:58:48.244029    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:48.244034    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:48.259345    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:48.259355    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:48.270941    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:48.270951    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:48.283568    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:48.283580    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:48.297141    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:48.297149    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:48.315206    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:58:48.315218    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:58:48.334159    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:48.334169    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:48.361270    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:48.361278    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:48.384207    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:48.384217    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:48.401449    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:48.401460    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:48.417872    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:48.417887    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:48.422099    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:48.422108    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:48.457371    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:48.457382    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:48.472430    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:48.472441    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:48.483765    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:48.483775    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:48.499072    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:48.499087    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:48.517075    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:48.517085    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
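The gathering pass that follows each discovery draws from a fixed set of sources: docker logs --tail 400 for every container found, journalctl for the kubelet and Docker/cri-docker units, a filtered dmesg, kubectl describe nodes against the in-VM kubeconfig, and a crictl/docker ps fallback for container status. A compressed sketch of one pass is below; the commands are copied verbatim from the ssh_runner lines above but executed through a local bash rather than SSH, and the single container ID is just one example from this log.

	// Sketch of one "Gathering logs for ..." pass. Commands are copied from
	// the ssh_runner lines above; this runs them via a local bash instead of
	// minikube's SSH session, so it is illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func gather(name, cmd string) {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Println(string(out))
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
		gather("etcd [598b57d62033]", "docker logs --tail 400 598b57d62033")
		gather("describe nodes",
			"sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}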
	I0806 00:58:51.044340    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:58:56.046815    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:58:56.047100    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:58:56.068194    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:58:56.068287    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:58:56.083080    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:58:56.083151    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:58:56.095540    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:58:56.095605    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:58:56.106651    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:58:56.106729    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:58:56.123890    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:58:56.123960    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:58:56.134882    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:58:56.134955    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:58:56.145166    4539 logs.go:276] 0 containers: []
	W0806 00:58:56.145178    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:58:56.145238    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:58:56.156101    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:58:56.156118    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:58:56.156124    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:58:56.167571    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:58:56.167582    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:58:56.186704    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:58:56.186715    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:58:56.190898    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:58:56.190908    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:58:56.203638    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:58:56.203650    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:58:56.217407    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:58:56.217417    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:58:56.240382    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:58:56.240394    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:58:56.261435    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:58:56.261447    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:58:56.291042    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:58:56.291060    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:58:56.318633    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:58:56.318644    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:58:56.355104    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:58:56.355116    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:58:56.369749    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:58:56.369761    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:58:56.381330    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:58:56.381340    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:58:56.398883    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:58:56.398899    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:58:56.413462    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:58:56.413477    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:58:56.429033    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:58:56.429048    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:58:56.442964    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:58:56.442976    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:58:58.957372    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:03.959608    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:03.959698    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:03.970643    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:03.970707    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:03.981416    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:03.981488    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:03.991772    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:03.991834    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:04.002384    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:04.002446    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:04.012791    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:04.012850    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:04.023664    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:04.023730    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:04.034393    4539 logs.go:276] 0 containers: []
	W0806 00:59:04.034406    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:04.034465    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:04.044540    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:04.044557    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:04.044563    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:04.080847    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:04.080860    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:04.108246    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:04.108257    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:04.126714    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:04.126724    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:04.144629    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:04.144640    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:04.155839    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:04.155852    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:04.167858    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:04.167874    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:04.182662    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:04.182673    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:04.194387    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:04.194396    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:04.211986    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:04.211996    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:04.241750    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:04.241761    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:04.246226    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:04.246234    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:04.261901    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:04.261911    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:04.278145    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:04.278157    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:04.292407    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:04.292417    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:04.303900    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:04.303912    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:04.315510    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:04.315521    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:06.843767    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:11.846448    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:11.846580    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:11.858633    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:11.858707    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:11.871817    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:11.871888    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:11.882175    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:11.882235    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:11.894285    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:11.894354    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:11.905467    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:11.905527    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:11.916113    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:11.916175    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:11.925795    4539 logs.go:276] 0 containers: []
	W0806 00:59:11.925810    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:11.925861    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:11.936278    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:11.936299    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:11.936305    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:11.940519    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:11.940529    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:11.957738    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:11.957748    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:11.968865    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:11.968876    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:11.980414    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:11.980424    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:11.994752    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:11.994762    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:12.005919    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:12.005933    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:12.017139    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:12.017154    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:12.042131    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:12.042138    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:12.070953    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:12.070964    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:12.087788    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:12.087799    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:12.101601    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:12.101612    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:12.116941    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:12.116953    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:12.134791    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:12.134800    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:12.173719    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:12.173730    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:12.188462    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:12.188474    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:12.212071    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:12.212080    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:14.728519    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:19.730954    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:19.731161    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:19.753059    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:19.753149    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:19.771574    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:19.771658    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:19.783348    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:19.783411    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:19.793919    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:19.793988    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:19.804079    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:19.804136    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:19.814457    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:19.814529    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:19.824785    4539 logs.go:276] 0 containers: []
	W0806 00:59:19.824798    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:19.824863    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:19.835967    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:19.835986    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:19.835991    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:19.850271    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:19.850284    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:19.874935    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:19.874942    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:19.879493    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:19.879499    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:19.914315    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:19.914328    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:19.928625    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:19.928635    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:19.941312    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:19.941328    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:19.952769    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:19.952779    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:19.966656    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:19.966666    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:19.989939    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:19.989950    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:20.001335    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:20.001349    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:20.012863    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:20.012875    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:20.030986    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:20.031000    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:20.049097    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:20.049110    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:20.078574    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:20.078583    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:20.102024    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:20.102035    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:20.119249    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:20.119262    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:22.640643    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:27.642926    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:27.643287    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:27.675108    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:27.675239    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:27.693940    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:27.694041    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:27.708700    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:27.708780    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:27.720898    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:27.720971    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:27.734158    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:27.734222    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:27.744904    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:27.744979    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:27.755370    4539 logs.go:276] 0 containers: []
	W0806 00:59:27.755382    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:27.755438    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:27.766364    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:27.766382    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:27.766389    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:27.801720    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:27.801735    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:27.816248    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:27.816261    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:27.831439    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:27.831450    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:27.843522    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:27.843532    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:27.873261    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:27.873273    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:27.886595    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:27.886609    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:27.901807    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:27.901817    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:27.913282    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:27.913292    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:27.924818    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:27.924830    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:27.948459    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:27.948469    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:27.952866    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:27.952874    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:27.966919    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:27.966933    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:27.982045    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:27.982057    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:27.999409    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:27.999420    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:28.023036    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:28.023048    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:28.040835    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:28.040852    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:30.554778    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:35.557396    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:35.557629    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:35.582923    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:35.583045    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:35.599308    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:35.599385    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:35.612262    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:35.612332    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:35.625646    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:35.625713    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:35.636338    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:35.636431    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:35.647123    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:35.647197    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:35.658085    4539 logs.go:276] 0 containers: []
	W0806 00:59:35.658095    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:35.658148    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:35.668456    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:35.668474    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:35.668480    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:35.672886    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:35.672893    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:35.707783    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:35.707794    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:35.719085    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:35.719097    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:35.737172    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:35.737182    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:35.761233    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:35.761242    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:35.790790    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:35.790801    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:35.803435    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:35.803448    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:35.817411    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:35.817424    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:35.828755    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:35.828765    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:35.851897    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:35.851907    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:35.865020    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:35.865031    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:35.878984    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:35.878995    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:35.894181    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:35.894195    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:35.920471    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:35.920483    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:35.934996    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:35.935007    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:35.946404    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:35.946415    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:38.460170    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:43.462474    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:43.462693    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:43.478932    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:43.479013    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:43.492065    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:43.492144    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:43.503086    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:43.503154    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:43.514124    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:43.514196    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:43.524389    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:43.524456    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:43.538394    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:43.538463    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:43.553279    4539 logs.go:276] 0 containers: []
	W0806 00:59:43.553293    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:43.553350    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:43.563545    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:43.563562    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:43.563567    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:43.592915    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:43.592925    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:43.610089    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:43.610102    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:43.620996    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:43.621009    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:43.656811    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:43.656824    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:43.671142    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:43.671155    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:43.681912    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:43.681924    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:43.693494    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:43.693504    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:43.704542    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:43.704553    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:43.729344    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:43.729355    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:43.745192    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:43.745202    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:43.770683    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:43.770694    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:43.782856    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:43.782866    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:43.788044    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:43.788055    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:43.801334    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:43.801344    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:43.816844    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:43.816858    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:43.833672    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:43.833683    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:46.352836    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:51.355154    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:51.355344    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:51.376205    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:51.376301    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:51.392054    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:51.392129    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:51.405684    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:51.405756    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:51.423739    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:51.423810    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:51.434311    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:51.434378    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:51.444993    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:51.445058    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:51.456309    4539 logs.go:276] 0 containers: []
	W0806 00:59:51.456322    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:51.456380    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:51.466805    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:51.466825    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:51.466831    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:51.471616    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:51.471625    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:51.486391    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:51.486401    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:51.498196    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:51.498207    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:51.521662    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:51.521672    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:51.534783    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:51.534794    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:51.564889    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:51.564898    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:51.578811    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:51.578820    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:51.602336    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:51.602348    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:51.614399    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:51.614413    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:51.632699    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:51.632710    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:51.668309    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:51.668322    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:51.681692    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:51.681705    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:51.694283    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:51.694294    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 00:59:51.712176    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:51.712187    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:51.723805    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:51.723816    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:51.738162    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:51.738175    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:54.258262    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 00:59:59.260547    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 00:59:59.260700    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 00:59:59.271943    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 00:59:59.272014    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 00:59:59.282426    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 00:59:59.282491    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 00:59:59.292937    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 00:59:59.293008    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 00:59:59.303792    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 00:59:59.303864    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 00:59:59.314846    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 00:59:59.314915    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 00:59:59.326177    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 00:59:59.326240    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 00:59:59.336681    4539 logs.go:276] 0 containers: []
	W0806 00:59:59.336693    4539 logs.go:278] No container was found matching "kindnet"
	I0806 00:59:59.336743    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 00:59:59.347392    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 00:59:59.347409    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 00:59:59.347417    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:59:59.351994    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:59:59.352003    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 00:59:59.387033    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 00:59:59.387044    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 00:59:59.411138    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 00:59:59.411150    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 00:59:59.428338    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 00:59:59.428351    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 00:59:59.440429    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 00:59:59.440440    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 00:59:59.451677    4539 logs.go:123] Gathering logs for Docker ...
	I0806 00:59:59.451688    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 00:59:59.476289    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 00:59:59.476296    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 00:59:59.490588    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 00:59:59.490601    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 00:59:59.506117    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 00:59:59.506133    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 00:59:59.518030    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 00:59:59.518040    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 00:59:59.535559    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 00:59:59.535569    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 00:59:59.547003    4539 logs.go:123] Gathering logs for container status ...
	I0806 00:59:59.547013    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 00:59:59.558957    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 00:59:59.558967    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:59:59.588769    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 00:59:59.588777    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 00:59:59.602554    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 00:59:59.602564    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 00:59:59.614451    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 00:59:59.614460    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 01:00:02.134598    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:07.136802    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:07.136991    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:07.159406    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 01:00:07.159529    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:07.175506    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 01:00:07.175596    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:07.192377    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 01:00:07.192448    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:07.202807    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 01:00:07.202883    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:07.214274    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 01:00:07.214341    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:07.225255    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 01:00:07.225319    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:07.235819    4539 logs.go:276] 0 containers: []
	W0806 01:00:07.235830    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:07.235892    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:07.246746    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 01:00:07.246764    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:07.246770    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:07.276064    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 01:00:07.276076    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 01:00:07.299528    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 01:00:07.299544    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 01:00:07.312181    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:07.312191    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:07.336471    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:07.336486    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:07.371038    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 01:00:07.371051    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 01:00:07.384317    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 01:00:07.384328    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 01:00:07.398615    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 01:00:07.398628    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 01:00:07.413795    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 01:00:07.413808    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 01:00:07.430272    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 01:00:07.430285    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 01:00:07.446737    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 01:00:07.446749    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 01:00:07.464993    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 01:00:07.465004    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 01:00:07.476339    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:07.476351    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:07.480733    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 01:00:07.480741    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 01:00:07.495819    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 01:00:07.495834    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 01:00:07.507533    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 01:00:07.507550    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 01:00:07.525820    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:00:07.525832    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:10.040148    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:15.042561    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
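
The cycle above repeats for the rest of this log: a healthz probe against https://10.0.2.15:8443/healthz that dies on a 5-second client timeout, followed by a per-component "docker ps -a" sweep (the -a flag lists exited containers too, which is why kube-apiserver, etcd, kube-scheduler and kube-controller-manager each show two IDs: the pre-restart container plus the restarted one) and a "docker logs" pass over each ID. A minimal Go sketch of the probe pattern, assuming the in-VM endpoint from the log; this is illustrative, not minikube's actual api_server.go:

	// probe_healthz.go: poll the apiserver /healthz endpoint the way the log
	// above does: short client timeout, TLS verification skipped for the
	// sketch; a failure hands control back to diagnostics gathering.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not healthy yet:", err) // caller would now gather container logs
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}
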
	I0806 01:00:15.042987    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:15.084440    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 01:00:15.084581    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:15.103881    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 01:00:15.103984    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:15.117936    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 01:00:15.118012    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:15.129915    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 01:00:15.129994    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:15.141177    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 01:00:15.141242    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:15.152377    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 01:00:15.152451    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:15.163949    4539 logs.go:276] 0 containers: []
	W0806 01:00:15.163962    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:15.164034    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:15.175147    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 01:00:15.175165    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:15.175171    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:15.211667    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 01:00:15.211677    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 01:00:15.224805    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 01:00:15.224817    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 01:00:15.236085    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 01:00:15.236097    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 01:00:15.251290    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 01:00:15.251303    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 01:00:15.263465    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 01:00:15.263479    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 01:00:15.281402    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:15.281416    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:15.306038    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 01:00:15.306048    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 01:00:15.323271    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 01:00:15.323285    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 01:00:15.340097    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 01:00:15.340111    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 01:00:15.357388    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 01:00:15.357401    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 01:00:15.369144    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 01:00:15.369160    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 01:00:15.395307    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 01:00:15.395321    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 01:00:15.407189    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:00:15.407201    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:15.421339    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:15.421353    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:15.450544    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:15.450552    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:15.455303    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 01:00:15.455312    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 01:00:17.972443    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:22.974839    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:22.974919    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:22.987929    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 01:00:22.988002    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:22.998380    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 01:00:22.998448    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:23.012280    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 01:00:23.012343    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:23.022686    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 01:00:23.022753    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:23.033404    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 01:00:23.033469    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:23.044636    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 01:00:23.044706    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:23.054998    4539 logs.go:276] 0 containers: []
	W0806 01:00:23.055013    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:23.055068    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:23.065786    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 01:00:23.065808    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 01:00:23.065814    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 01:00:23.078898    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:23.078909    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:23.115803    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 01:00:23.115814    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 01:00:23.133391    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:23.133400    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:23.158157    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:00:23.158165    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:23.171108    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 01:00:23.171121    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 01:00:23.184838    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 01:00:23.184848    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 01:00:23.196370    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 01:00:23.196382    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 01:00:23.220402    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 01:00:23.220415    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 01:00:23.235479    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 01:00:23.235491    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 01:00:23.253318    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 01:00:23.253328    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 01:00:23.266196    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:23.266206    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:23.295376    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:23.295385    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:23.299417    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 01:00:23.299423    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 01:00:23.312785    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 01:00:23.312796    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 01:00:23.332022    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 01:00:23.332032    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 01:00:23.343946    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 01:00:23.343956    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 01:00:25.857180    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:30.859537    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:30.859860    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:30.898385    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 01:00:30.898525    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:30.917778    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 01:00:30.917863    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:30.933254    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 01:00:30.933315    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:30.944676    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 01:00:30.944746    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:30.955887    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 01:00:30.955960    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:30.966925    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 01:00:30.966987    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:30.978014    4539 logs.go:276] 0 containers: []
	W0806 01:00:30.978031    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:30.978089    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:30.989583    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 01:00:30.989606    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:30.989612    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:31.018160    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 01:00:31.018168    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 01:00:31.032130    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 01:00:31.032143    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 01:00:31.044537    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 01:00:31.044547    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 01:00:31.059713    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 01:00:31.059724    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 01:00:31.078999    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 01:00:31.079010    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 01:00:31.095320    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:31.095331    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:31.119851    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 01:00:31.119859    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 01:00:31.137598    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:31.137608    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:31.172863    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 01:00:31.172875    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 01:00:31.185923    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 01:00:31.185934    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 01:00:31.198110    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 01:00:31.198120    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 01:00:31.222288    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 01:00:31.222301    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 01:00:31.233829    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:31.233842    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:31.237952    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 01:00:31.237962    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 01:00:31.252606    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 01:00:31.252617    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 01:00:31.268181    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:00:31.268192    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:33.781961    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:38.784194    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:38.784334    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:00:38.797626    4539 logs.go:276] 2 containers: [05773e88ef12 4b5adefd37e4]
	I0806 01:00:38.797698    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:00:38.808450    4539 logs.go:276] 2 containers: [598b57d62033 9418470fa8b3]
	I0806 01:00:38.808519    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:00:38.819544    4539 logs.go:276] 1 containers: [96cc7574e18d]
	I0806 01:00:38.819609    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:00:38.831954    4539 logs.go:276] 2 containers: [8aa5decddf74 5082f389d196]
	I0806 01:00:38.832021    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:00:38.842647    4539 logs.go:276] 1 containers: [9c5b7c732760]
	I0806 01:00:38.842712    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:00:38.853666    4539 logs.go:276] 2 containers: [9325ba01036a e512bcc15a6b]
	I0806 01:00:38.853731    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:00:38.868324    4539 logs.go:276] 0 containers: []
	W0806 01:00:38.868337    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:00:38.868394    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:00:38.879170    4539 logs.go:276] 2 containers: [374e0e1dd230 cc8735fa11c6]
	I0806 01:00:38.879187    4539 logs.go:123] Gathering logs for kube-apiserver [4b5adefd37e4] ...
	I0806 01:00:38.879195    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b5adefd37e4"
	I0806 01:00:38.891783    4539 logs.go:123] Gathering logs for kube-proxy [9c5b7c732760] ...
	I0806 01:00:38.891794    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c5b7c732760"
	I0806 01:00:38.906196    4539 logs.go:123] Gathering logs for storage-provisioner [cc8735fa11c6] ...
	I0806 01:00:38.906210    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc8735fa11c6"
	I0806 01:00:38.917777    4539 logs.go:123] Gathering logs for kube-scheduler [5082f389d196] ...
	I0806 01:00:38.917790    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5082f389d196"
	I0806 01:00:38.932456    4539 logs.go:123] Gathering logs for kube-controller-manager [9325ba01036a] ...
	I0806 01:00:38.932474    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9325ba01036a"
	I0806 01:00:38.949896    4539 logs.go:123] Gathering logs for storage-provisioner [374e0e1dd230] ...
	I0806 01:00:38.949908    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 374e0e1dd230"
	I0806 01:00:38.961507    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:00:38.961519    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:00:38.993004    4539 logs.go:123] Gathering logs for kube-apiserver [05773e88ef12] ...
	I0806 01:00:38.993021    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05773e88ef12"
	I0806 01:00:39.007779    4539 logs.go:123] Gathering logs for kube-controller-manager [e512bcc15a6b] ...
	I0806 01:00:39.007791    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e512bcc15a6b"
	I0806 01:00:39.025275    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:00:39.025285    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:00:39.037150    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:00:39.037161    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:00:39.041363    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:00:39.041369    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:00:39.075852    4539 logs.go:123] Gathering logs for etcd [598b57d62033] ...
	I0806 01:00:39.075864    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 598b57d62033"
	I0806 01:00:39.090309    4539 logs.go:123] Gathering logs for etcd [9418470fa8b3] ...
	I0806 01:00:39.090320    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9418470fa8b3"
	I0806 01:00:39.105861    4539 logs.go:123] Gathering logs for coredns [96cc7574e18d] ...
	I0806 01:00:39.105871    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96cc7574e18d"
	I0806 01:00:39.117669    4539 logs.go:123] Gathering logs for kube-scheduler [8aa5decddf74] ...
	I0806 01:00:39.117679    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8aa5decddf74"
	I0806 01:00:39.141213    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:00:39.141225    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:00:41.667128    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:46.669423    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:46.669488    4539 kubeadm.go:597] duration metric: took 4m3.372749s to restartPrimaryControlPlane
	W0806 01:00:46.669540    4539 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0806 01:00:46.669566    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0806 01:00:47.598904    4539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 01:00:47.604131    4539 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 01:00:47.607043    4539 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 01:00:47.609637    4539 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 01:00:47.609645    4539 kubeadm.go:157] found existing configuration files:
	
	I0806 01:00:47.609669    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf
	I0806 01:00:47.612135    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 01:00:47.612160    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 01:00:47.614900    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf
	I0806 01:00:47.617352    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 01:00:47.617371    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 01:00:47.620760    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf
	I0806 01:00:47.624270    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 01:00:47.624296    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 01:00:47.627106    4539 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf
	I0806 01:00:47.629807    4539 kubeadm.go:163] "https://control-plane.minikube.internal:50486" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50486 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 01:00:47.629831    4539 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
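
The grep/rm pairs above implement stale-kubeconfig cleanup: each conf file under /etc/kubernetes is grepped for the expected control-plane endpoint, and any non-zero exit (here status 2, because the files are gone after kubeadm reset) leads to an unconditional rm -f so that kubeadm init can write fresh ones. A rough Go equivalent of that check, reusing the endpoint and paths from the log (a sketch, not kubeadm.go itself):

	// stale_conf.go: flag kubeconfigs that are missing or do not mention the
	// expected control-plane endpoint, mirroring the grep sequence above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func staleOrMissing(path, endpoint string) bool {
		data, err := os.ReadFile(path)
		if err != nil {
			return true // "No such file or directory" in the log counts as stale
		}
		return !strings.Contains(string(data), endpoint)
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:50486"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if staleOrMissing(conf, endpoint) {
				fmt.Println("would remove:", conf) // the log then runs: sudo rm -f <conf>
			}
		}
	}
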
	I0806 01:00:47.633113    4539 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 01:00:47.650078    4539 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0806 01:00:47.650106    4539 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 01:00:47.700320    4539 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 01:00:47.700373    4539 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 01:00:47.700415    4539 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0806 01:00:47.749898    4539 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 01:00:47.755113    4539 out.go:204]   - Generating certificates and keys ...
	I0806 01:00:47.755151    4539 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 01:00:47.755189    4539 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 01:00:47.755240    4539 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 01:00:47.755273    4539 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 01:00:47.755325    4539 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 01:00:47.755355    4539 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 01:00:47.755382    4539 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 01:00:47.755428    4539 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 01:00:47.755470    4539 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 01:00:47.755511    4539 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 01:00:47.755532    4539 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 01:00:47.755565    4539 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 01:00:47.848780    4539 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 01:00:47.961286    4539 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 01:00:48.027964    4539 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 01:00:48.196225    4539 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 01:00:48.227929    4539 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 01:00:48.228277    4539 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 01:00:48.228300    4539 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 01:00:48.298803    4539 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 01:00:48.303213    4539 out.go:204]   - Booting up control plane ...
	I0806 01:00:48.303259    4539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 01:00:48.303306    4539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 01:00:48.303342    4539 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 01:00:48.303385    4539 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 01:00:48.303473    4539 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 01:00:53.309380    4539 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.007648 seconds
	I0806 01:00:53.309433    4539 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 01:00:53.313795    4539 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 01:00:53.836946    4539 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 01:00:53.837091    4539 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-180000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 01:00:54.340659    4539 kubeadm.go:310] [bootstrap-token] Using token: irs0sz.hqxmy2t5x8gei7l0
	I0806 01:00:54.346548    4539 out.go:204]   - Configuring RBAC rules ...
	I0806 01:00:54.346610    4539 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 01:00:54.346663    4539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 01:00:54.348419    4539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 01:00:54.354097    4539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 01:00:54.354968    4539 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 01:00:54.355715    4539 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 01:00:54.358826    4539 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 01:00:54.523972    4539 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 01:00:54.744808    4539 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 01:00:54.745305    4539 kubeadm.go:310] 
	I0806 01:00:54.745337    4539 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 01:00:54.745340    4539 kubeadm.go:310] 
	I0806 01:00:54.745384    4539 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 01:00:54.745390    4539 kubeadm.go:310] 
	I0806 01:00:54.745404    4539 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 01:00:54.745441    4539 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 01:00:54.745469    4539 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 01:00:54.745472    4539 kubeadm.go:310] 
	I0806 01:00:54.745507    4539 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 01:00:54.745513    4539 kubeadm.go:310] 
	I0806 01:00:54.745532    4539 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 01:00:54.745535    4539 kubeadm.go:310] 
	I0806 01:00:54.745579    4539 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 01:00:54.745617    4539 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 01:00:54.745676    4539 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 01:00:54.745681    4539 kubeadm.go:310] 
	I0806 01:00:54.745715    4539 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 01:00:54.745753    4539 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 01:00:54.745756    4539 kubeadm.go:310] 
	I0806 01:00:54.745790    4539 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token irs0sz.hqxmy2t5x8gei7l0 \
	I0806 01:00:54.745855    4539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:004497139f3dc048a20953509ef68dec08d54d5db6f0d1b10a415219fecf194f \
	I0806 01:00:54.745866    4539 kubeadm.go:310] 	--control-plane 
	I0806 01:00:54.745868    4539 kubeadm.go:310] 
	I0806 01:00:54.745917    4539 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 01:00:54.745920    4539 kubeadm.go:310] 
	I0806 01:00:54.745965    4539 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token irs0sz.hqxmy2t5x8gei7l0 \
	I0806 01:00:54.746019    4539 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:004497139f3dc048a20953509ef68dec08d54d5db6f0d1b10a415219fecf194f 
	I0806 01:00:54.746248    4539 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
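
The --discovery-token-ca-cert-hash value printed by kubeadm above is the standard public-key pin: a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate. A short Go sketch that recomputes it from the CA cert, assuming minikube's cert directory from this log (run inside the guest):

	// ca_hash.go: derive kubeadm's discovery-token-ca-cert-hash from ca.crt.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey) // re-encode the public key as SPKI DER
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
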
	I0806 01:00:54.746284    4539 cni.go:84] Creating CNI manager for ""
	I0806 01:00:54.746293    4539 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:00:54.750066    4539 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0806 01:00:54.759123    4539 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0806 01:00:54.762371    4539 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
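
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI chain that the "recommending bridge" line refers to. The log does not show its contents; a representative bridge + host-local configuration of roughly that shape looks like the following (every value here is an illustrative assumption, not the exact file minikube wrote):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
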
	I0806 01:00:54.767127    4539 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 01:00:54.767173    4539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:00:54.767200    4539 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-180000 minikube.k8s.io/updated_at=2024_08_06T01_00_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=stopped-upgrade-180000 minikube.k8s.io/primary=true
	I0806 01:00:54.813807    4539 kubeadm.go:1113] duration metric: took 46.668708ms to wait for elevateKubeSystemPrivileges
	I0806 01:00:54.813819    4539 ops.go:34] apiserver oom_adj: -16
	I0806 01:00:54.813825    4539 kubeadm.go:394] duration metric: took 4m11.531061208s to StartCluster
	I0806 01:00:54.813835    4539 settings.go:142] acquiring lock: {Name:mk345cecdfb5b849013811e238a7c51cfd047298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:00:54.813930    4539 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:00:54.814356    4539 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/kubeconfig: {Name:mk054609795edfdc491af119142ed9d8e6063b99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:00:54.814557    4539 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:00:54.814596    4539 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 01:00:54.814630    4539 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-180000"
	I0806 01:00:54.814639    4539 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-180000"
	I0806 01:00:54.814653    4539 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 01:00:54.814643    4539 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-180000"
	I0806 01:00:54.814657    4539 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-180000"
	W0806 01:00:54.814663    4539 addons.go:243] addon storage-provisioner should already be in state true
	I0806 01:00:54.814675    4539 host.go:66] Checking if "stopped-upgrade-180000" exists ...
	I0806 01:00:54.822232    4539 out.go:177] * Verifying Kubernetes components...
	I0806 01:00:54.824993    4539 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 01:00:54.825019    4539 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 01:00:54.825840    4539 kapi.go:59] client config for stopped-upgrade-180000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/profiles/stopped-upgrade-180000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-965/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1040a7f90), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 01:00:54.825993    4539 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-180000"
	W0806 01:00:54.826000    4539 addons.go:243] addon default-storageclass should already be in state true
	I0806 01:00:54.826011    4539 host.go:66] Checking if "stopped-upgrade-180000" exists ...
	I0806 01:00:54.826615    4539 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 01:00:54.826621    4539 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 01:00:54.826627    4539 sshutil.go:53] new ssh client: &{IP:localhost Port:50451 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0806 01:00:54.829213    4539 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 01:00:54.829220    4539 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 01:00:54.829226    4539 sshutil.go:53] new ssh client: &{IP:localhost Port:50451 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0806 01:00:54.896524    4539 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 01:00:54.901839    4539 api_server.go:52] waiting for apiserver process to appear ...
	I0806 01:00:54.901880    4539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 01:00:54.906277    4539 api_server.go:72] duration metric: took 91.7085ms to wait for apiserver process to appear ...
	I0806 01:00:54.906287    4539 api_server.go:88] waiting for apiserver healthz status ...
	I0806 01:00:54.906294    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:00:54.915060    4539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 01:00:54.915405    4539 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 01:00:59.908413    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:00:59.908474    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:04.909232    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:04.909279    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:09.909782    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:09.909802    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:14.910397    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:14.910443    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:19.911262    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:19.911297    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:24.912268    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:24.912287    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0806 01:01:25.280208    4539 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0806 01:01:25.284406    4539 out.go:177] * Enabled addons: storage-provisioner
	I0806 01:01:25.293311    4539 addons.go:510] duration metric: took 30.478929875s for enable addons: enabled=[storage-provisioner]
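
Note the asymmetry in the addon results above: storage-provisioner is applied over SSH with the in-VM kubeconfig and is reported enabled, while default-storageclass fails, because marking a class as default first requires listing StorageClasses through the 10.0.2.15:8443 endpoint that never became reachable. A sketch of that failing call using client-go (assuming k8s.io/client-go is on the module path; the kubeconfig path is taken from this log):

	// list_sc.go: the List call the default-storageclass addon depends on.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19370-965/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			fmt.Println("list failed (the i/o timeout in the log):", err)
			return
		}
		for _, sc := range scs.Items {
			fmt.Println(sc.Name)
		}
	}
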
	I0806 01:01:29.913603    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:29.913630    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:34.915130    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:34.915162    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:39.917271    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:39.917319    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:44.919522    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:44.919546    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:49.921695    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:49.921720    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:01:54.923873    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:01:54.924051    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:01:54.936838    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:01:54.936916    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:01:54.947509    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:01:54.947580    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:01:54.957882    4539 logs.go:276] 2 containers: [ee35e15fafe0 964f8ef4b02d]
	I0806 01:01:54.957947    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:01:54.968168    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:01:54.968240    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:01:54.979403    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:01:54.979472    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:01:54.989497    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:01:54.989559    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:01:54.999766    4539 logs.go:276] 0 containers: []
	W0806 01:01:54.999779    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:01:54.999834    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:01:55.013965    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:01:55.013980    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:01:55.013986    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:01:55.027453    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:01:55.027464    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:01:55.051841    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:01:55.051851    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:01:55.063406    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:01:55.063419    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:01:55.094034    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:01:55.094041    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:01:55.098377    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:01:55.098384    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:01:55.135633    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:01:55.135645    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:01:55.159568    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:01:55.159578    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:01:55.171991    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:01:55.172005    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:01:55.184038    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:01:55.184050    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:01:55.199327    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:01:55.199341    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:01:55.211420    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:01:55.211431    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:01:55.222800    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:01:55.222809    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:01:57.749565    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:02:02.751867    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:02:02.752127    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:02:02.777260    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:02:02.777370    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:02:02.793927    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:02:02.794000    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:02:02.806337    4539 logs.go:276] 2 containers: [ee35e15fafe0 964f8ef4b02d]
	I0806 01:02:02.806412    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:02:02.817996    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:02:02.818055    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:02:02.827996    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:02:02.828066    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:02:02.838357    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:02:02.838415    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:02:02.848784    4539 logs.go:276] 0 containers: []
	W0806 01:02:02.848798    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:02:02.848855    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:02:02.859176    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:02:02.859190    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:02:02.859194    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:02:02.870631    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:02:02.870643    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:02:02.894420    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:02:02.894433    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:02:02.923753    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:02:02.923762    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:02:02.961573    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:02:02.961586    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:02:02.973905    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:02:02.973919    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:02:02.985099    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:02:02.985112    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:02:02.999684    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:02:02.999694    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:02:03.011272    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:02:03.011286    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:02:03.028263    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:02:03.028273    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:02:03.041232    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:02:03.041245    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:02:03.046245    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:02:03.046254    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:02:03.066342    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:02:03.066355    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
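The block above is one full iteration of the cycle that repeats through the rest of this trace: minikube probes https://10.0.2.15:8443/healthz (api_server.go:253), the request dies after 5 s with a Client.Timeout (api_server.go:269), and the tool then enumerates one container per control-plane component via `docker ps -a --filter=name=k8s_<component>` and tails the last 400 log lines of each before retrying. A minimal Go sketch of the probe half follows; it is not minikube's actual code — the 5-second client timeout is inferred from the timestamps, the ~2.5 s pause from the gap between cycles, and the TLS handling is an illustration-only shortcut where the real client presents the cluster CA.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver healthz endpoint,
// mirroring the "Checking apiserver healthz at ..." / "stopped: ..." pair
// in the log above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed: matches the 5 s gap before each "stopped" line
		Transport: &http.Transport{
			// Illustration-only: skip cert verification for the guest's
			// self-signed endpoint; minikube trusts the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %w", err) // surfaces the Client.Timeout error seen in the log
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	for attempt := 0; attempt < 8; attempt++ {
		err := checkHealthz("https://10.0.2.15:8443/healthz")
		if err == nil {
			fmt.Println("apiserver healthy")
			return
		}
		fmt.Println(err)
		time.Sleep(2500 * time.Millisecond) // assumed backoff between cycles
	}
}
```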
	I0806 01:02:05.582266    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:02:10.584978    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:02:10.585399    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:02:10.628034    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:02:10.628168    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:02:10.651628    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:02:10.651729    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:02:10.666186    4539 logs.go:276] 2 containers: [ee35e15fafe0 964f8ef4b02d]
	I0806 01:02:10.666272    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:02:10.678998    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:02:10.679061    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:02:10.689719    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:02:10.689777    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:02:10.700445    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:02:10.700512    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:02:10.711372    4539 logs.go:276] 0 containers: []
	W0806 01:02:10.711390    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:02:10.711438    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:02:10.722604    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:02:10.722621    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:02:10.722627    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:02:10.752812    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:02:10.752821    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:02:10.756927    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:02:10.756935    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:02:10.791333    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:02:10.791347    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:02:10.805533    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:02:10.805545    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:02:10.826841    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:02:10.826855    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:02:10.839227    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:02:10.839241    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:02:10.856187    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:02:10.856200    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:02:10.867620    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:02:10.867632    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:02:10.891477    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:02:10.891484    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:02:10.909663    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:02:10.909677    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:02:10.923814    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:02:10.923826    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:02:10.934911    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:02:10.934922    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:02:13.451950    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:02:18.454769    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:02:18.455161    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:02:18.492728    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:02:18.492854    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:02:18.514285    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:02:18.514401    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:02:18.530423    4539 logs.go:276] 2 containers: [ee35e15fafe0 964f8ef4b02d]
	I0806 01:02:18.530494    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:02:18.542613    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:02:18.542681    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:02:18.555618    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:02:18.555687    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:02:18.566145    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:02:18.566212    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:02:18.576706    4539 logs.go:276] 0 containers: []
	W0806 01:02:18.576716    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:02:18.576766    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:02:18.586791    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:02:18.586806    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:02:18.586812    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:02:18.602537    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:02:18.602549    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:02:18.615107    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:02:18.615123    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:02:18.636092    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:02:18.636106    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:02:18.667690    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:02:18.667700    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:02:18.703196    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:02:18.703207    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:02:18.718105    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:02:18.718118    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:02:18.730307    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:02:18.730318    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:02:18.744099    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:02:18.744109    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:02:18.764613    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:02:18.764624    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:02:18.776169    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:02:18.776183    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:02:18.800144    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:02:18.800156    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:02:18.804623    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:02:18.804632    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:02:21.317770    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:02:26.320181    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:02:26.320396    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:02:26.331933    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:02:26.332009    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:02:26.343166    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:02:26.343235    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:02:26.354251    4539 logs.go:276] 2 containers: [ee35e15fafe0 964f8ef4b02d]
	I0806 01:02:26.354319    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:02:26.364636    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:02:26.364697    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:02:26.375286    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:02:26.375357    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:02:26.389946    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:02:26.390019    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:02:26.399809    4539 logs.go:276] 0 containers: []
	W0806 01:02:26.399820    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:02:26.399872    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:02:26.410723    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:02:26.410737    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:02:26.410743    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:02:26.422120    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:02:26.422130    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:02:26.453849    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:02:26.453856    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:02:26.488827    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:02:26.488840    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:02:26.502791    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:02:26.502805    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:02:26.514546    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:02:26.514559    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:02:26.526486    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:02:26.526497    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:02:26.540829    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:02:26.540841    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:02:26.558855    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:02:26.558864    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:02:26.583783    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:02:26.583790    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:02:26.596215    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:02:26.596224    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:02:26.600699    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:02:26.600705    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:02:26.615559    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:02:26.615574    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:02:29.129103    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:02:34.131915    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:02:34.132300    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:02:34.171804    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:02:34.171942    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:02:34.193838    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:02:34.193957    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:02:34.208841    4539 logs.go:276] 2 containers: [ee35e15fafe0 964f8ef4b02d]
	I0806 01:02:34.208919    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:02:34.221357    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:02:34.221423    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:02:34.232420    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:02:34.232483    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:02:34.243643    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:02:34.243710    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:02:34.253755    4539 logs.go:276] 0 containers: []
	W0806 01:02:34.253767    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:02:34.253817    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:02:34.264383    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:02:34.264397    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:02:34.264401    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:02:34.276685    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:02:34.276700    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:02:34.301193    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:02:34.301201    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:02:34.341861    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:02:34.341872    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:02:34.358171    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:02:34.358182    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:02:34.372404    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:02:34.372417    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:02:34.388260    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:02:34.388272    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:02:34.403851    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:02:34.403861    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:02:34.416181    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:02:34.416192    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:02:34.434567    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:02:34.434581    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:02:34.445957    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:02:34.445968    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:02:34.475853    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:02:34.475860    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:02:34.480417    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:02:34.480422    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:02:36.997683    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:02:42.000130    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:02:42.000601    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:02:42.038850    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:02:42.038970    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:02:42.062106    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:02:42.062216    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:02:42.078105    4539 logs.go:276] 2 containers: [ee35e15fafe0 964f8ef4b02d]
	I0806 01:02:42.078182    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:02:42.091320    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:02:42.091388    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:02:42.103174    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:02:42.103248    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:02:42.117503    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:02:42.117568    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:02:42.128512    4539 logs.go:276] 0 containers: []
	W0806 01:02:42.128525    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:02:42.128583    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:02:42.139867    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:02:42.139882    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:02:42.139887    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:02:42.152884    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:02:42.152894    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:02:42.171209    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:02:42.171220    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:02:42.183869    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:02:42.183881    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:02:42.196181    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:02:42.196195    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:02:42.211972    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:02:42.211984    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:02:42.224636    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:02:42.224650    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:02:42.260014    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:02:42.260029    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:02:42.276182    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:02:42.276194    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:02:42.288464    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:02:42.288477    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:02:42.304336    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:02:42.304345    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:02:42.328766    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:02:42.328774    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:02:42.359455    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:02:42.359461    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:02:44.866173    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:02:49.868685    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:02:49.869073    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:02:49.910670    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:02:49.910797    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:02:49.936854    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:02:49.936958    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:02:49.952198    4539 logs.go:276] 2 containers: [ee35e15fafe0 964f8ef4b02d]
	I0806 01:02:49.952264    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:02:49.964933    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:02:49.965000    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:02:49.976880    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:02:49.976952    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:02:49.988618    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:02:49.988683    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:02:50.000117    4539 logs.go:276] 0 containers: []
	W0806 01:02:50.000130    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:02:50.000182    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:02:50.011476    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:02:50.011492    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:02:50.011497    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:02:50.024926    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:02:50.024936    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:02:50.056132    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:02:50.056139    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:02:50.093911    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:02:50.093922    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:02:50.109204    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:02:50.109217    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:02:50.125407    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:02:50.125418    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:02:50.144404    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:02:50.144417    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:02:50.157361    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:02:50.157375    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:02:50.180996    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:02:50.181007    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:02:50.193109    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:02:50.193122    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:02:50.197336    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:02:50.197344    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:02:50.212385    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:02:50.212397    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:02:50.224250    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:02:50.224263    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:02:52.738477    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:02:57.740825    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:02:57.741056    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:02:57.763828    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:02:57.763927    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:02:57.779328    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:02:57.779410    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:02:57.792297    4539 logs.go:276] 2 containers: [ee35e15fafe0 964f8ef4b02d]
	I0806 01:02:57.792364    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:02:57.803390    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:02:57.803456    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:02:57.813742    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:02:57.813803    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:02:57.824048    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:02:57.824112    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:02:57.834525    4539 logs.go:276] 0 containers: []
	W0806 01:02:57.834536    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:02:57.834597    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:02:57.848420    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:02:57.848435    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:02:57.848441    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:02:57.852737    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:02:57.852746    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:02:57.886264    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:02:57.886276    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:02:57.897703    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:02:57.897718    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:02:57.909122    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:02:57.909136    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:02:57.923805    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:02:57.923815    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:02:57.935298    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:02:57.935309    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:02:57.960778    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:02:57.960784    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:02:57.992384    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:02:57.992394    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:02:58.003966    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:02:58.003980    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:02:58.018211    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:02:58.018221    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:02:58.029711    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:02:58.029725    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:02:58.047626    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:02:58.047636    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:03:00.563987    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:03:05.564986    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:03:05.565431    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:03:05.605747    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:03:05.605887    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:03:05.628196    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:03:05.628307    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:03:05.647417    4539 logs.go:276] 2 containers: [ee35e15fafe0 964f8ef4b02d]
	I0806 01:03:05.647491    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:03:05.659335    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:03:05.659401    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:03:05.670663    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:03:05.670732    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:03:05.682024    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:03:05.682082    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:03:05.692653    4539 logs.go:276] 0 containers: []
	W0806 01:03:05.692665    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:03:05.692717    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:03:05.703021    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:03:05.703038    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:03:05.703044    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:03:05.733348    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:03:05.733354    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:03:05.748899    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:03:05.748911    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:03:05.760548    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:03:05.760558    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:03:05.775228    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:03:05.775239    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:03:05.786779    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:03:05.786789    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:03:05.798360    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:03:05.798373    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:03:05.809862    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:03:05.809875    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:03:05.813953    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:03:05.813960    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:03:05.848030    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:03:05.848044    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:03:05.862183    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:03:05.862195    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:03:05.873980    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:03:05.873992    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:03:05.894831    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:03:05.894842    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:03:08.428532    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:03:13.431351    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:03:13.431747    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:03:13.471741    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:03:13.471864    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:03:13.493162    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:03:13.493274    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:03:13.508909    4539 logs.go:276] 4 containers: [582e8f5b34eb db665579f68e ee35e15fafe0 964f8ef4b02d]
	I0806 01:03:13.508977    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:03:13.521319    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:03:13.521382    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:03:13.545401    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:03:13.545475    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:03:13.559535    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:03:13.559601    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:03:13.570157    4539 logs.go:276] 0 containers: []
	W0806 01:03:13.570169    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:03:13.570233    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:03:13.581193    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:03:13.581210    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:03:13.581216    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:03:13.593088    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:03:13.593100    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:03:13.616292    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:03:13.616299    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:03:13.628625    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:03:13.628640    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:03:13.667309    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:03:13.667322    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:03:13.682032    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:03:13.682045    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:03:13.697215    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:03:13.697224    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:03:13.715029    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:03:13.715040    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:03:13.747325    4539 logs.go:123] Gathering logs for coredns [582e8f5b34eb] ...
	I0806 01:03:13.747338    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 582e8f5b34eb"
	I0806 01:03:13.758986    4539 logs.go:123] Gathering logs for coredns [db665579f68e] ...
	I0806 01:03:13.758998    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db665579f68e"
	I0806 01:03:13.770862    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:03:13.770875    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:03:13.786705    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:03:13.786718    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:03:13.798673    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:03:13.798684    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:03:13.810754    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:03:13.810767    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:03:13.815690    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:03:13.815696    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
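Note the change at 01:03:13: the coredns filter now returns four containers (582e8f5b34eb and db665579f68e join ee35e15fafe0 and 964f8ef4b02d), so replacement coredns containers were created while the apiserver stayed unreachable, and the gather loop simply tails all of them. A minimal Go sketch of that enumerate-then-tail step, under the assumption that the collector shells out exactly as the logged commands show (the ssh transport and report plumbing are elided):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the kubelet's k8s_<component> prefix, mirroring the logged
// "docker ps -a --filter=name=k8s_... --format={{.ID}}" calls.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("enumerate", c, "failed:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		for _, id := range ids {
			// mirrors: /bin/bash -c "docker logs --tail 400 <id>"
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("  %s: %d bytes of log\n", id, len(logs))
		}
	}
}
```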
	I0806 01:03:16.332195    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:03:21.335009    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:03:21.335253    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:03:21.367373    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:03:21.367521    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:03:21.385776    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:03:21.385866    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:03:21.400759    4539 logs.go:276] 4 containers: [582e8f5b34eb db665579f68e ee35e15fafe0 964f8ef4b02d]
	I0806 01:03:21.400829    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:03:21.412643    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:03:21.412708    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:03:21.424032    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:03:21.424097    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:03:21.434704    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:03:21.434759    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:03:21.444899    4539 logs.go:276] 0 containers: []
	W0806 01:03:21.444911    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:03:21.444969    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:03:21.455595    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:03:21.455614    4539 logs.go:123] Gathering logs for coredns [db665579f68e] ...
	I0806 01:03:21.455620    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db665579f68e"
	I0806 01:03:21.467047    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:03:21.467060    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:03:21.478683    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:03:21.478693    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:03:21.490089    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:03:21.490101    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:03:21.505029    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:03:21.505042    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:03:21.521225    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:03:21.521236    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:03:21.550769    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:03:21.550776    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:03:21.592396    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:03:21.592406    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:03:21.609303    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:03:21.609313    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:03:21.631616    4539 logs.go:123] Gathering logs for coredns [582e8f5b34eb] ...
	I0806 01:03:21.631626    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 582e8f5b34eb"
	I0806 01:03:21.644616    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:03:21.644627    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:03:21.661906    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:03:21.661920    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:03:21.666600    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:03:21.666607    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:03:21.681401    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:03:21.681411    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:03:21.704793    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:03:21.704801    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:03:24.218385    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:03:29.221117    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:03:29.221220    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:03:29.233428    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:03:29.233484    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:03:29.245965    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:03:29.246018    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:03:29.257370    4539 logs.go:276] 4 containers: [582e8f5b34eb db665579f68e ee35e15fafe0 964f8ef4b02d]
	I0806 01:03:29.257425    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:03:29.267982    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:03:29.268058    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:03:29.279093    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:03:29.279169    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:03:29.291431    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:03:29.291497    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:03:29.303151    4539 logs.go:276] 0 containers: []
	W0806 01:03:29.303163    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:03:29.303231    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:03:29.315192    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:03:29.315211    4539 logs.go:123] Gathering logs for coredns [db665579f68e] ...
	I0806 01:03:29.315216    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db665579f68e"
	I0806 01:03:29.328056    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:03:29.328071    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:03:29.340709    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:03:29.340720    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:03:29.356964    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:03:29.356978    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:03:29.375335    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:03:29.375347    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:03:29.407348    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:03:29.407365    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:03:29.412629    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:03:29.412643    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:03:29.451082    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:03:29.451097    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:03:29.467004    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:03:29.467017    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:03:29.479505    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:03:29.479517    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:03:29.494564    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:03:29.494581    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:03:29.521348    4539 logs.go:123] Gathering logs for coredns [582e8f5b34eb] ...
	I0806 01:03:29.521368    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 582e8f5b34eb"
	I0806 01:03:29.534380    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:03:29.534390    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:03:29.546724    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:03:29.546735    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:03:29.559022    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:03:29.559035    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:03:32.074145    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:03:37.076480    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:03:37.076704    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:03:37.098543    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:03:37.098650    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:03:37.114856    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:03:37.114930    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:03:37.126869    4539 logs.go:276] 4 containers: [582e8f5b34eb db665579f68e ee35e15fafe0 964f8ef4b02d]
	I0806 01:03:37.126946    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:03:37.137754    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:03:37.137835    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:03:37.148494    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:03:37.148559    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:03:37.158831    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:03:37.158901    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:03:37.172396    4539 logs.go:276] 0 containers: []
	W0806 01:03:37.172407    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:03:37.172464    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:03:37.186974    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:03:37.186994    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:03:37.186998    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:03:37.205648    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:03:37.205661    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:03:37.217679    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:03:37.217692    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:03:37.229499    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:03:37.229512    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:03:37.240614    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:03:37.240625    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:03:37.255656    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:03:37.255666    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:03:37.267174    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:03:37.267184    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:03:37.271442    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:03:37.271450    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:03:37.305351    4539 logs.go:123] Gathering logs for coredns [582e8f5b34eb] ...
	I0806 01:03:37.305364    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 582e8f5b34eb"
	I0806 01:03:37.316738    4539 logs.go:123] Gathering logs for coredns [db665579f68e] ...
	I0806 01:03:37.316752    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db665579f68e"
	I0806 01:03:37.328443    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:03:37.328455    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:03:37.353273    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:03:37.353279    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:03:37.384606    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:03:37.384615    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:03:37.398271    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:03:37.398284    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:03:37.410096    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:03:37.410110    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:03:39.930638    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:03:44.932595    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:03:44.932787    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:03:44.955784    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:03:44.955879    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:03:44.971761    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:03:44.971832    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:03:44.982894    4539 logs.go:276] 4 containers: [582e8f5b34eb db665579f68e ee35e15fafe0 964f8ef4b02d]
	I0806 01:03:44.982952    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:03:44.994351    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:03:44.994418    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:03:45.004808    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:03:45.004865    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:03:45.015304    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:03:45.015370    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:03:45.026654    4539 logs.go:276] 0 containers: []
	W0806 01:03:45.026668    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:03:45.026724    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:03:45.036859    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:03:45.036876    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:03:45.036881    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:03:45.067008    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:03:45.067022    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:03:45.071689    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:03:45.071697    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:03:45.083403    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:03:45.083415    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:03:45.095010    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:03:45.095021    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:03:45.120126    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:03:45.120136    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:03:45.133914    4539 logs.go:123] Gathering logs for coredns [582e8f5b34eb] ...
	I0806 01:03:45.133925    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 582e8f5b34eb"
	I0806 01:03:45.152545    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:03:45.152556    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:03:45.164145    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:03:45.164154    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:03:45.178004    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:03:45.178012    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:03:45.190032    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:03:45.190042    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:03:45.205374    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:03:45.205382    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:03:45.222914    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:03:45.222927    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:03:45.234156    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:03:45.234169    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:03:45.268520    4539 logs.go:123] Gathering logs for coredns [db665579f68e] ...
	I0806 01:03:45.268534    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db665579f68e"
	I0806 01:03:47.782981    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:03:52.785808    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:03:52.786188    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:03:52.827704    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:03:52.827811    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:03:52.845672    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:03:52.845750    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:03:52.859784    4539 logs.go:276] 4 containers: [582e8f5b34eb db665579f68e ee35e15fafe0 964f8ef4b02d]
	I0806 01:03:52.859856    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:03:52.871266    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:03:52.871323    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:03:52.892583    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:03:52.892643    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:03:52.907914    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:03:52.907981    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:03:52.922657    4539 logs.go:276] 0 containers: []
	W0806 01:03:52.922671    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:03:52.922720    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:03:52.933348    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:03:52.933367    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:03:52.933372    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:03:52.937936    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:03:52.937944    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:03:52.973289    4539 logs.go:123] Gathering logs for coredns [582e8f5b34eb] ...
	I0806 01:03:52.973298    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 582e8f5b34eb"
	I0806 01:03:52.985494    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:03:52.985507    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:03:53.003337    4539 logs.go:123] Gathering logs for coredns [db665579f68e] ...
	I0806 01:03:53.003347    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db665579f68e"
	I0806 01:03:53.015420    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:03:53.015429    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:03:53.028494    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:03:53.028504    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:03:53.060669    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:03:53.060688    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:03:53.074113    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:03:53.074125    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:03:53.086734    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:03:53.086746    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:03:53.099237    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:03:53.099249    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:03:53.114644    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:03:53.114654    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:03:53.131137    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:03:53.131149    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:03:53.144426    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:03:53.144439    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:03:53.167776    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:03:53.167788    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:03:55.694929    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:04:00.697283    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:04:00.697766    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:04:00.739067    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:04:00.739194    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:04:00.760341    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:04:00.760452    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:04:00.777802    4539 logs.go:276] 4 containers: [582e8f5b34eb db665579f68e ee35e15fafe0 964f8ef4b02d]
	I0806 01:04:00.777868    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:04:00.789878    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:04:00.789947    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:04:00.801360    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:04:00.801429    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:04:00.812622    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:04:00.812692    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:04:00.823439    4539 logs.go:276] 0 containers: []
	W0806 01:04:00.823448    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:04:00.823500    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:04:00.834102    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:04:00.834118    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:04:00.834124    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:04:00.849038    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:04:00.849049    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:04:00.864100    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:04:00.864113    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:04:00.881709    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:04:00.881721    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:04:00.893401    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:04:00.893411    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:04:00.923670    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:04:00.923681    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:04:00.959342    4539 logs.go:123] Gathering logs for coredns [582e8f5b34eb] ...
	I0806 01:04:00.959354    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 582e8f5b34eb"
	I0806 01:04:00.971447    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:04:00.971458    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:04:00.982805    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:04:00.982817    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:04:00.987116    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:04:00.987123    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:04:01.010488    4539 logs.go:123] Gathering logs for coredns [db665579f68e] ...
	I0806 01:04:01.010502    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db665579f68e"
	I0806 01:04:01.022345    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:04:01.022354    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:04:01.042195    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:04:01.042207    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:04:01.053820    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:04:01.053833    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:04:01.077998    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:04:01.078006    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:04:03.592642    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:04:08.595378    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:04:08.595593    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:04:08.617636    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:04:08.617733    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:04:08.632601    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:04:08.632684    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:04:08.648881    4539 logs.go:276] 4 containers: [582e8f5b34eb db665579f68e ee35e15fafe0 964f8ef4b02d]
	I0806 01:04:08.648957    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:04:08.659897    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:04:08.659966    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:04:08.671465    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:04:08.671529    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:04:08.681794    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:04:08.681855    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:04:08.695700    4539 logs.go:276] 0 containers: []
	W0806 01:04:08.695712    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:04:08.695767    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:04:08.711239    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:04:08.711257    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:04:08.711262    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:04:08.741868    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:04:08.741880    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:04:08.765620    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:04:08.765629    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:04:08.777190    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:04:08.777205    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:04:08.790277    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:04:08.790293    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:04:08.808529    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:04:08.808539    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:04:08.842252    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:04:08.842263    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:04:08.856792    4539 logs.go:123] Gathering logs for coredns [db665579f68e] ...
	I0806 01:04:08.856805    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db665579f68e"
	I0806 01:04:08.867985    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:04:08.867996    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:04:08.879738    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:04:08.879754    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:04:08.892134    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:04:08.892144    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:04:08.907275    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:04:08.907285    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:04:08.918424    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:04:08.918433    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:04:08.922653    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:04:08.922659    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:04:08.937542    4539 logs.go:123] Gathering logs for coredns [582e8f5b34eb] ...
	I0806 01:04:08.937552    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 582e8f5b34eb"
	I0806 01:04:11.451348    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:04:16.454243    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:04:16.454675    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:04:16.493692    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:04:16.493804    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:04:16.516490    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:04:16.516598    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:04:16.533682    4539 logs.go:276] 4 containers: [582e8f5b34eb db665579f68e ee35e15fafe0 964f8ef4b02d]
	I0806 01:04:16.533756    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:04:16.547977    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:04:16.548045    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:04:16.558817    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:04:16.558885    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:04:16.571694    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:04:16.571760    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:04:16.582441    4539 logs.go:276] 0 containers: []
	W0806 01:04:16.582458    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:04:16.582515    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:04:16.593798    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:04:16.593816    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:04:16.593822    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:04:16.608597    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:04:16.608608    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:04:16.620020    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:04:16.620033    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:04:16.631719    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:04:16.631735    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:04:16.664470    4539 logs.go:123] Gathering logs for coredns [db665579f68e] ...
	I0806 01:04:16.664482    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db665579f68e"
	I0806 01:04:16.676762    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:04:16.676773    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:04:16.700376    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:04:16.700385    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:04:16.714071    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:04:16.714084    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:04:16.725889    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:04:16.725903    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:04:16.744569    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:04:16.744581    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:04:16.781157    4539 logs.go:123] Gathering logs for coredns [582e8f5b34eb] ...
	I0806 01:04:16.781171    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 582e8f5b34eb"
	I0806 01:04:16.796580    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:04:16.796592    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:04:16.808888    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:04:16.808900    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:04:16.823794    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:04:16.823808    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:04:16.835578    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:04:16.835589    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:04:19.340570    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:04:24.342846    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:04:24.343274    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:04:24.383159    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:04:24.383275    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:04:24.404945    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:04:24.405053    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:04:24.420565    4539 logs.go:276] 4 containers: [582e8f5b34eb db665579f68e ee35e15fafe0 964f8ef4b02d]
	I0806 01:04:24.420631    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:04:24.433118    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:04:24.433188    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:04:24.444218    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:04:24.444273    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:04:24.454896    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:04:24.454959    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:04:24.465054    4539 logs.go:276] 0 containers: []
	W0806 01:04:24.465064    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:04:24.465108    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:04:24.480555    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:04:24.480572    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:04:24.480577    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:04:24.485654    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:04:24.485660    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:04:24.500489    4539 logs.go:123] Gathering logs for coredns [db665579f68e] ...
	I0806 01:04:24.500498    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db665579f68e"
	I0806 01:04:24.512874    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:04:24.512885    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:04:24.524782    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:04:24.524794    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:04:24.555680    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:04:24.555687    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:04:24.566798    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:04:24.566810    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:04:24.581903    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:04:24.581915    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:04:24.599458    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:04:24.599471    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:04:24.618167    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:04:24.618177    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:04:24.641643    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:04:24.641653    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:04:24.653189    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:04:24.653199    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:04:24.689271    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:04:24.689282    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:04:24.704037    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:04:24.704047    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:04:24.715485    4539 logs.go:123] Gathering logs for coredns [582e8f5b34eb] ...
	I0806 01:04:24.715499    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 582e8f5b34eb"
	I0806 01:04:27.229007    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:04:32.231428    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:04:32.231672    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:04:32.263942    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:04:32.264068    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:04:32.288367    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:04:32.288508    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:04:32.308983    4539 logs.go:276] 4 containers: [582e8f5b34eb db665579f68e ee35e15fafe0 964f8ef4b02d]
	I0806 01:04:32.309054    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:04:32.320840    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:04:32.320910    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:04:32.331931    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:04:32.331997    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:04:32.342120    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:04:32.342177    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:04:32.352477    4539 logs.go:276] 0 containers: []
	W0806 01:04:32.352492    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:04:32.352540    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:04:32.363361    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:04:32.363382    4539 logs.go:123] Gathering logs for coredns [582e8f5b34eb] ...
	I0806 01:04:32.363386    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 582e8f5b34eb"
	I0806 01:04:32.375346    4539 logs.go:123] Gathering logs for coredns [db665579f68e] ...
	I0806 01:04:32.375359    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db665579f68e"
	I0806 01:04:32.386970    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:04:32.386983    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:04:32.409454    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:04:32.409463    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:04:32.420449    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:04:32.420458    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:04:32.431890    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:04:32.431898    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:04:32.463914    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:04:32.463922    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:04:32.498020    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:04:32.498030    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:04:32.512584    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:04:32.512594    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:04:32.526697    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:04:32.526707    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:04:32.541612    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:04:32.541622    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:04:32.562020    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:04:32.562033    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:04:32.588470    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:04:32.588481    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:04:32.595113    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:04:32.595125    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:04:32.607658    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:04:32.607668    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:04:35.128007    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:04:40.129934    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:04:40.130304    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:04:40.161377    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:04:40.161505    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:04:40.185295    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:04:40.185383    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:04:40.199123    4539 logs.go:276] 4 containers: [582e8f5b34eb db665579f68e ee35e15fafe0 964f8ef4b02d]
	I0806 01:04:40.199191    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:04:40.210714    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:04:40.210777    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:04:40.221285    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:04:40.221345    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:04:40.231543    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:04:40.231612    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:04:40.241934    4539 logs.go:276] 0 containers: []
	W0806 01:04:40.241946    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:04:40.242002    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:04:40.252758    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:04:40.252780    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:04:40.252785    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:04:40.285006    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:04:40.285014    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:04:40.299289    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:04:40.299300    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:04:40.313357    4539 logs.go:123] Gathering logs for coredns [db665579f68e] ...
	I0806 01:04:40.313367    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db665579f68e"
	I0806 01:04:40.325364    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:04:40.325375    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:04:40.337451    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:04:40.337464    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:04:40.350025    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:04:40.350039    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:04:40.361659    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:04:40.361672    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:04:40.384768    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:04:40.384775    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:04:40.389295    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:04:40.389301    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:04:40.401250    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:04:40.401261    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:04:40.418164    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:04:40.418176    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:04:40.433613    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:04:40.433624    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:04:40.469131    4539 logs.go:123] Gathering logs for coredns [582e8f5b34eb] ...
	I0806 01:04:40.469145    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 582e8f5b34eb"
	I0806 01:04:40.484414    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:04:40.484427    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:04:43.003616    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:04:48.005825    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:04:48.006211    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0806 01:04:48.042783    4539 logs.go:276] 1 containers: [309831a81e1f]
	I0806 01:04:48.042939    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0806 01:04:48.063507    4539 logs.go:276] 1 containers: [07576aa30f53]
	I0806 01:04:48.063595    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0806 01:04:48.078448    4539 logs.go:276] 4 containers: [582e8f5b34eb db665579f68e ee35e15fafe0 964f8ef4b02d]
	I0806 01:04:48.078516    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0806 01:04:48.090583    4539 logs.go:276] 1 containers: [c3e8b8d64dad]
	I0806 01:04:48.090644    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0806 01:04:48.101068    4539 logs.go:276] 1 containers: [a3c272a1667c]
	I0806 01:04:48.101129    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0806 01:04:48.111675    4539 logs.go:276] 1 containers: [09e30a58d2e0]
	I0806 01:04:48.111735    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0806 01:04:48.121840    4539 logs.go:276] 0 containers: []
	W0806 01:04:48.121850    4539 logs.go:278] No container was found matching "kindnet"
	I0806 01:04:48.121898    4539 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0806 01:04:48.132704    4539 logs.go:276] 1 containers: [7b5896e91f5c]
	I0806 01:04:48.132723    4539 logs.go:123] Gathering logs for coredns [ee35e15fafe0] ...
	I0806 01:04:48.132728    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee35e15fafe0"
	I0806 01:04:48.144966    4539 logs.go:123] Gathering logs for kube-controller-manager [09e30a58d2e0] ...
	I0806 01:04:48.144980    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09e30a58d2e0"
	I0806 01:04:48.162183    4539 logs.go:123] Gathering logs for storage-provisioner [7b5896e91f5c] ...
	I0806 01:04:48.162192    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b5896e91f5c"
	I0806 01:04:48.174118    4539 logs.go:123] Gathering logs for etcd [07576aa30f53] ...
	I0806 01:04:48.174128    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07576aa30f53"
	I0806 01:04:48.188210    4539 logs.go:123] Gathering logs for coredns [582e8f5b34eb] ...
	I0806 01:04:48.188218    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 582e8f5b34eb"
	I0806 01:04:48.199592    4539 logs.go:123] Gathering logs for kube-proxy [a3c272a1667c] ...
	I0806 01:04:48.199603    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c272a1667c"
	I0806 01:04:48.210974    4539 logs.go:123] Gathering logs for container status ...
	I0806 01:04:48.210983    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0806 01:04:48.224153    4539 logs.go:123] Gathering logs for kubelet ...
	I0806 01:04:48.224164    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 01:04:48.255155    4539 logs.go:123] Gathering logs for describe nodes ...
	I0806 01:04:48.255164    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0806 01:04:48.290092    4539 logs.go:123] Gathering logs for kube-apiserver [309831a81e1f] ...
	I0806 01:04:48.290103    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 309831a81e1f"
	I0806 01:04:48.304429    4539 logs.go:123] Gathering logs for dmesg ...
	I0806 01:04:48.304443    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 01:04:48.309224    4539 logs.go:123] Gathering logs for coredns [db665579f68e] ...
	I0806 01:04:48.309233    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 db665579f68e"
	I0806 01:04:48.320746    4539 logs.go:123] Gathering logs for coredns [964f8ef4b02d] ...
	I0806 01:04:48.320756    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 964f8ef4b02d"
	I0806 01:04:48.332761    4539 logs.go:123] Gathering logs for kube-scheduler [c3e8b8d64dad] ...
	I0806 01:04:48.332773    4539 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3e8b8d64dad"
	I0806 01:04:48.350213    4539 logs.go:123] Gathering logs for Docker ...
	I0806 01:04:48.350225    4539 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0806 01:04:50.874866    4539 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0806 01:04:55.877053    4539 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0806 01:04:55.885499    4539 out.go:177] 
	W0806 01:04:55.889486    4539 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0806 01:04:55.889502    4539 out.go:239] * 
	W0806 01:04:55.889987    4539 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:04:55.906389    4539 out.go:177] 
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-180000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (571.93s)
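
The loop captured above is minikube giving up on the control plane: every probe of https://10.0.2.15:8443/healthz times out after ~5s, the per-container logs are re-gathered (docker ps -a --filter=name=k8s_... followed by docker logs --tail 400 ...), and once the 6m0s node wait elapses the start aborts with GUEST_START. Below is a minimal standalone sketch of such a healthz probe in Go — illustrative only, not minikube's actual api_server.go code; the endpoint, per-request timeout, and overall deadline are taken from the log.

// Hypothetical healthz poller mirroring the failure pattern above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between probes in the log
		Transport: &http.Transport{
			// The guest apiserver serves a self-signed cert, so skip
			// verification for this diagnostic probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(6 * time.Minute) // "wait 6m0s for node" from the error
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// e.g. "Client.Timeout exceeded while awaiting headers"
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver healthz never reported healthy: deadline exceeded")
}

Run against the guest while it boots and the output shows the same timeout errors as the log until the apiserver answers 200 — which, in this run, it never did.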
TestPause/serial/Start (10.05s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-053000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-053000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.019635458s)
-- stdout --
	* [pause-053000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-053000" primary control-plane node in "pause-053000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-053000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-053000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-053000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-053000 -n pause-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-053000 -n pause-053000: exit status 7 (29.64825ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-053000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.05s)
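
This failure, and the qemu2 start failures that follow, share one root cause: nothing is listening on /var/run/socket_vmnet, so every VM create or restart dies with "Connection refused". A hypothetical standalone probe for that condition is sketched below — it is not part of the test suite; only the socket path comes from the errors above.

// Hypothetical reachability check for the socket_vmnet unix socket.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the error messages above
	if _, err := os.Stat(sock); err != nil {
		fmt.Println("socket file missing:", err)
		return
	}
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the failures above: the socket
		// file exists but no socket_vmnet daemon is accepting on it.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On a healthy host the dial succeeds; on this test host it would report the same refusal, which is a daemon/launchd problem on the machine rather than a minikube regression.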
TestNoKubernetes/serial/StartWithK8s (10.06s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-825000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-825000 --driver=qemu2 : exit status 80 (9.9906005s)
-- stdout --
	* [NoKubernetes-825000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-825000" primary control-plane node in "NoKubernetes-825000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-825000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-825000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-825000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-825000 -n NoKubernetes-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-825000 -n NoKubernetes-825000: exit status 7 (64.820958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-825000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.06s)
TestNoKubernetes/serial/StartWithStopK8s (5.26s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-825000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-825000 --no-kubernetes --driver=qemu2 : exit status 80 (5.232506083s)

-- stdout --
	* [NoKubernetes-825000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-825000
	* Restarting existing qemu2 VM for "NoKubernetes-825000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-825000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-825000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-825000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-825000 -n NoKubernetes-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-825000 -n NoKubernetes-825000: exit status 7 (30.359667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-825000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.26s)
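The post-mortem helper above treats `minikube status` exit status 7 as "may be ok": the profile exists but the host is stopped, matching the "Stopped" stdout. A hedged sketch of the same check via os/exec, using the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "NoKubernetes-825000")
	out, err := cmd.Output() // stdout is still returned on a non-zero exit
	fmt.Printf("host state: %s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// 7 = profile exists but the host is not running ("Stopped"),
		// which is exactly what the post-mortem observed here.
		fmt.Println("exit status 7: stopped host (may be ok)")
	}
}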

TestNoKubernetes/serial/Start (5.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-825000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-825000 --no-kubernetes --driver=qemu2 : exit status 80 (5.240063333s)

-- stdout --
	* [NoKubernetes-825000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-825000
	* Restarting existing qemu2 VM for "NoKubernetes-825000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-825000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-825000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-825000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-825000 -n NoKubernetes-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-825000 -n NoKubernetes-825000: exit status 7 (50.308916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-825000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)

TestNoKubernetes/serial/StartNoArgs (5.29s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-825000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-825000 --driver=qemu2 : exit status 80 (5.2600125s)

-- stdout --
	* [NoKubernetes-825000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-825000
	* Restarting existing qemu2 VM for "NoKubernetes-825000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-825000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-825000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-825000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-825000 -n NoKubernetes-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-825000 -n NoKubernetes-825000: exit status 7 (30.749959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-825000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.29s)
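In the verbose traces that follow, qemu is never started directly: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client, which connects to the daemon socket and hands the connection to the wrapped command as fd 3 (hence `-netdev socket,id=net0,fd=3` on the qemu command lines below; the fd-3 handoff is socket_vmnet's usual contract, stated here as background rather than taken from this log). The wrapper's failure can therefore be reproduced without qemu at all; a sketch, with /usr/bin/true standing in for the qemu invocation:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// socket_vmnet_client <socket> <command...>: connect to the daemon,
	// then exec the command with the connection inherited as fd 3.
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet", "/usr/bin/true")
	// With no daemon running this prints the same line seen throughout
	// this report: Failed to connect to "/var/run/socket_vmnet": Connection refused
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("wrapper exited:", err) // mirrors libmachine's "exit status 1"
	}
}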

TestNetworkPlugins/group/auto/Start (9.8s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.799240166s)

-- stdout --
	* [auto-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-187000" primary control-plane node in "auto-187000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-187000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:03:06.848086    5110 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:03:06.848206    5110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:03:06.848209    5110 out.go:304] Setting ErrFile to fd 2...
	I0806 01:03:06.848212    5110 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:03:06.848335    5110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:03:06.849541    5110 out.go:298] Setting JSON to false
	I0806 01:03:06.866126    5110 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3754,"bootTime":1722927632,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:03:06.866205    5110 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:03:06.872793    5110 out.go:177] * [auto-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:03:06.879676    5110 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:03:06.879692    5110 notify.go:220] Checking for updates...
	I0806 01:03:06.886649    5110 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:03:06.889689    5110 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:03:06.892681    5110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:03:06.895690    5110 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:03:06.898646    5110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:03:06.902042    5110 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:03:06.902110    5110 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 01:03:06.902158    5110 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:03:06.906616    5110 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:03:06.913667    5110 start.go:297] selected driver: qemu2
	I0806 01:03:06.913675    5110 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:03:06.913682    5110 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:03:06.916043    5110 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:03:06.918620    5110 out.go:177] * Automatically selected the socket_vmnet network
	I0806 01:03:06.921646    5110 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:03:06.921678    5110 cni.go:84] Creating CNI manager for ""
	I0806 01:03:06.921685    5110 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:03:06.921689    5110 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 01:03:06.921712    5110 start.go:340] cluster config:
	{Name:auto-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:03:06.925395    5110 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:03:06.932628    5110 out.go:177] * Starting "auto-187000" primary control-plane node in "auto-187000" cluster
	I0806 01:03:06.936568    5110 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:03:06.936583    5110 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 01:03:06.936591    5110 cache.go:56] Caching tarball of preloaded images
	I0806 01:03:06.936654    5110 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:03:06.936659    5110 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:03:06.936712    5110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/auto-187000/config.json ...
	I0806 01:03:06.936723    5110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/auto-187000/config.json: {Name:mka36fe63048147eff83e610fcb278124acc7e34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:03:06.936938    5110 start.go:360] acquireMachinesLock for auto-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:03:06.936971    5110 start.go:364] duration metric: took 26.5µs to acquireMachinesLock for "auto-187000"
	I0806 01:03:06.936980    5110 start.go:93] Provisioning new machine with config: &{Name:auto-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:03:06.937018    5110 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:03:06.945645    5110 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:03:06.960894    5110 start.go:159] libmachine.API.Create for "auto-187000" (driver="qemu2")
	I0806 01:03:06.960926    5110 client.go:168] LocalClient.Create starting
	I0806 01:03:06.960991    5110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:03:06.961021    5110 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:06.961029    5110 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:06.961069    5110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:03:06.961092    5110 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:06.961098    5110 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:06.961458    5110 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:03:07.113801    5110 main.go:141] libmachine: Creating SSH key...
	I0806 01:03:07.154664    5110 main.go:141] libmachine: Creating Disk image...
	I0806 01:03:07.154672    5110 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:03:07.154849    5110 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/disk.qcow2
	I0806 01:03:07.164031    5110 main.go:141] libmachine: STDOUT: 
	I0806 01:03:07.164050    5110 main.go:141] libmachine: STDERR: 
	I0806 01:03:07.164110    5110 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/disk.qcow2 +20000M
	I0806 01:03:07.171848    5110 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:03:07.171864    5110 main.go:141] libmachine: STDERR: 
	I0806 01:03:07.171886    5110 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/disk.qcow2
	I0806 01:03:07.171889    5110 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:03:07.171897    5110 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:03:07.171923    5110 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:a3:08:78:70:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/disk.qcow2
	I0806 01:03:07.173487    5110 main.go:141] libmachine: STDOUT: 
	I0806 01:03:07.173515    5110 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:03:07.173533    5110 client.go:171] duration metric: took 212.604125ms to LocalClient.Create
	I0806 01:03:09.175733    5110 start.go:128] duration metric: took 2.238695917s to createHost
	I0806 01:03:09.175799    5110 start.go:83] releasing machines lock for "auto-187000", held for 2.23883375s
	W0806 01:03:09.175868    5110 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:09.192809    5110 out.go:177] * Deleting "auto-187000" in qemu2 ...
	W0806 01:03:09.221062    5110 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:09.221098    5110 start.go:729] Will try again in 5 seconds ...
	I0806 01:03:14.223258    5110 start.go:360] acquireMachinesLock for auto-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:03:14.223870    5110 start.go:364] duration metric: took 511.583µs to acquireMachinesLock for "auto-187000"
	I0806 01:03:14.223978    5110 start.go:93] Provisioning new machine with config: &{Name:auto-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:03:14.224290    5110 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:03:14.234120    5110 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:03:14.281756    5110 start.go:159] libmachine.API.Create for "auto-187000" (driver="qemu2")
	I0806 01:03:14.281812    5110 client.go:168] LocalClient.Create starting
	I0806 01:03:14.281936    5110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:03:14.282007    5110 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:14.282024    5110 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:14.282103    5110 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:03:14.282149    5110 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:14.282164    5110 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:14.282758    5110 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:03:14.444441    5110 main.go:141] libmachine: Creating SSH key...
	I0806 01:03:14.559054    5110 main.go:141] libmachine: Creating Disk image...
	I0806 01:03:14.559063    5110 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:03:14.559275    5110 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/disk.qcow2
	I0806 01:03:14.568929    5110 main.go:141] libmachine: STDOUT: 
	I0806 01:03:14.568950    5110 main.go:141] libmachine: STDERR: 
	I0806 01:03:14.569008    5110 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/disk.qcow2 +20000M
	I0806 01:03:14.576943    5110 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:03:14.576963    5110 main.go:141] libmachine: STDERR: 
	I0806 01:03:14.576974    5110 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/disk.qcow2
	I0806 01:03:14.576978    5110 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:03:14.576984    5110 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:03:14.577006    5110 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:8c:5d:78:e1:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/auto-187000/disk.qcow2
	I0806 01:03:14.578734    5110 main.go:141] libmachine: STDOUT: 
	I0806 01:03:14.578750    5110 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:03:14.578761    5110 client.go:171] duration metric: took 296.94475ms to LocalClient.Create
	I0806 01:03:16.580936    5110 start.go:128] duration metric: took 2.356630584s to createHost
	I0806 01:03:16.581004    5110 start.go:83] releasing machines lock for "auto-187000", held for 2.357091375s
	W0806 01:03:16.581403    5110 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:16.594101    5110 out.go:177] 
	W0806 01:03:16.598168    5110 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:03:16.598236    5110 out.go:239] * 
	* 
	W0806 01:03:16.600432    5110 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:03:16.609111    5110 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.80s)
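The stderr trace above shows the driver's full recovery path: createHost fails on the socket refusal, the half-created profile is deleted, start.go waits a fixed five seconds, retries once, and then exits with GUEST_PROVISION. A compressed Go sketch of that control flow; illustrative only, not minikube's actual code:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for minikube's createHost path, which failed both
// times above with the same socket_vmnet refusal.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		// First failure is non-fatal: delete the profile and retry once.
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // start.go: "Will try again in 5 seconds"
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}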

TestNetworkPlugins/group/kindnet/Start (9.99s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.985405459s)

-- stdout --
	* [kindnet-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-187000" primary control-plane node in "kindnet-187000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-187000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:03:18.766512    5219 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:03:18.766657    5219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:03:18.766660    5219 out.go:304] Setting ErrFile to fd 2...
	I0806 01:03:18.766662    5219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:03:18.766791    5219 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:03:18.767830    5219 out.go:298] Setting JSON to false
	I0806 01:03:18.784694    5219 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3766,"bootTime":1722927632,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:03:18.784758    5219 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:03:18.789913    5219 out.go:177] * [kindnet-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:03:18.796698    5219 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:03:18.796768    5219 notify.go:220] Checking for updates...
	I0806 01:03:18.804755    5219 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:03:18.807692    5219 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:03:18.810737    5219 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:03:18.813699    5219 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:03:18.816710    5219 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:03:18.820063    5219 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:03:18.820128    5219 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 01:03:18.820203    5219 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:03:18.824669    5219 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:03:18.831725    5219 start.go:297] selected driver: qemu2
	I0806 01:03:18.831731    5219 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:03:18.831736    5219 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:03:18.833814    5219 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:03:18.836623    5219 out.go:177] * Automatically selected the socket_vmnet network
	I0806 01:03:18.839786    5219 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:03:18.839807    5219 cni.go:84] Creating CNI manager for "kindnet"
	I0806 01:03:18.839817    5219 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0806 01:03:18.839859    5219 start.go:340] cluster config:
	{Name:kindnet-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:03:18.843273    5219 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:03:18.850701    5219 out.go:177] * Starting "kindnet-187000" primary control-plane node in "kindnet-187000" cluster
	I0806 01:03:18.854708    5219 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:03:18.854724    5219 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 01:03:18.854735    5219 cache.go:56] Caching tarball of preloaded images
	I0806 01:03:18.854804    5219 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:03:18.854817    5219 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:03:18.854888    5219 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/kindnet-187000/config.json ...
	I0806 01:03:18.854898    5219 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/kindnet-187000/config.json: {Name:mk6010bdb2ca822b69d9d1088809b7ae4112f448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:03:18.855107    5219 start.go:360] acquireMachinesLock for kindnet-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:03:18.855139    5219 start.go:364] duration metric: took 26.084µs to acquireMachinesLock for "kindnet-187000"
	I0806 01:03:18.855149    5219 start.go:93] Provisioning new machine with config: &{Name:kindnet-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:03:18.855171    5219 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:03:18.863744    5219 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:03:18.879843    5219 start.go:159] libmachine.API.Create for "kindnet-187000" (driver="qemu2")
	I0806 01:03:18.879876    5219 client.go:168] LocalClient.Create starting
	I0806 01:03:18.879943    5219 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:03:18.879975    5219 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:18.879988    5219 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:18.880024    5219 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:03:18.880047    5219 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:18.880057    5219 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:18.880464    5219 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:03:19.032447    5219 main.go:141] libmachine: Creating SSH key...
	I0806 01:03:19.298210    5219 main.go:141] libmachine: Creating Disk image...
	I0806 01:03:19.298224    5219 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:03:19.298440    5219 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/disk.qcow2
	I0806 01:03:19.308137    5219 main.go:141] libmachine: STDOUT: 
	I0806 01:03:19.308160    5219 main.go:141] libmachine: STDERR: 
	I0806 01:03:19.308216    5219 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/disk.qcow2 +20000M
	I0806 01:03:19.316206    5219 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:03:19.316236    5219 main.go:141] libmachine: STDERR: 
	I0806 01:03:19.316259    5219 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/disk.qcow2
	I0806 01:03:19.316265    5219 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:03:19.316275    5219 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:03:19.316301    5219 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:59:05:c0:4d:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/disk.qcow2
	I0806 01:03:19.317857    5219 main.go:141] libmachine: STDOUT: 
	I0806 01:03:19.317870    5219 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:03:19.317888    5219 client.go:171] duration metric: took 438.008875ms to LocalClient.Create
	I0806 01:03:21.320096    5219 start.go:128] duration metric: took 2.464906667s to createHost
	I0806 01:03:21.320171    5219 start.go:83] releasing machines lock for "kindnet-187000", held for 2.465038916s
	W0806 01:03:21.320258    5219 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:21.329600    5219 out.go:177] * Deleting "kindnet-187000" in qemu2 ...
	W0806 01:03:21.351821    5219 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:21.351848    5219 start.go:729] Will try again in 5 seconds ...
	I0806 01:03:26.354111    5219 start.go:360] acquireMachinesLock for kindnet-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:03:26.354589    5219 start.go:364] duration metric: took 340.709µs to acquireMachinesLock for "kindnet-187000"
	I0806 01:03:26.354740    5219 start.go:93] Provisioning new machine with config: &{Name:kindnet-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:03:26.355025    5219 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:03:26.364696    5219 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:03:26.414097    5219 start.go:159] libmachine.API.Create for "kindnet-187000" (driver="qemu2")
	I0806 01:03:26.414161    5219 client.go:168] LocalClient.Create starting
	I0806 01:03:26.414278    5219 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:03:26.414383    5219 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:26.414403    5219 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:26.414464    5219 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:03:26.414514    5219 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:26.414526    5219 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:26.415105    5219 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:03:26.577071    5219 main.go:141] libmachine: Creating SSH key...
	I0806 01:03:26.660176    5219 main.go:141] libmachine: Creating Disk image...
	I0806 01:03:26.660186    5219 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:03:26.660403    5219 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/disk.qcow2
	I0806 01:03:26.669806    5219 main.go:141] libmachine: STDOUT: 
	I0806 01:03:26.669827    5219 main.go:141] libmachine: STDERR: 
	I0806 01:03:26.669891    5219 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/disk.qcow2 +20000M
	I0806 01:03:26.677985    5219 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:03:26.678002    5219 main.go:141] libmachine: STDERR: 
	I0806 01:03:26.678015    5219 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/disk.qcow2
	I0806 01:03:26.678019    5219 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:03:26.678030    5219 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:03:26.678057    5219 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:0d:12:b8:5f:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kindnet-187000/disk.qcow2
	I0806 01:03:26.679757    5219 main.go:141] libmachine: STDOUT: 
	I0806 01:03:26.679773    5219 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:03:26.679786    5219 client.go:171] duration metric: took 265.622417ms to LocalClient.Create
	I0806 01:03:28.682063    5219 start.go:128] duration metric: took 2.326989791s to createHost
	I0806 01:03:28.682205    5219 start.go:83] releasing machines lock for "kindnet-187000", held for 2.327608708s
	W0806 01:03:28.682542    5219 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:28.695148    5219 out.go:177] 
	W0806 01:03:28.698199    5219 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:03:28.698221    5219 out.go:239] * 
	W0806 01:03:28.700233    5219 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:03:28.709067    5219 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.99s)
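
Every failure in this group has the same proximate cause: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must reach a socket_vmnet daemon listening on the unix socket /var/run/socket_vmnet, and "Connection refused" means nothing is listening there, so the VM never gets its network file descriptor. The following is a minimal standalone Go probe for that precondition; it is a diagnostic sketch written for this report, not part of the minikube test suite, and the socket path is taken from the SocketVMnetPath value in the config dumps above.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the same unix socket socket_vmnet_client connects to.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// A refused dial here matches the error seen in every run in this report.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails the way these runs did, restarting the socket_vmnet daemon on the CI host is the first thing to try before rerunning the suite.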

TestNetworkPlugins/group/flannel/Start (9.93s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
E0806 01:03:35.457089    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.932137834s)

-- stdout --
	* [flannel-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-187000" primary control-plane node in "flannel-187000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-187000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:03:30.972845    5340 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:03:30.972974    5340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:03:30.972978    5340 out.go:304] Setting ErrFile to fd 2...
	I0806 01:03:30.972980    5340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:03:30.973114    5340 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:03:30.974157    5340 out.go:298] Setting JSON to false
	I0806 01:03:30.990615    5340 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3778,"bootTime":1722927632,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:03:30.990689    5340 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:03:30.996383    5340 out.go:177] * [flannel-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:03:31.003376    5340 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:03:31.003410    5340 notify.go:220] Checking for updates...
	I0806 01:03:31.010463    5340 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:03:31.013481    5340 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:03:31.016386    5340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:03:31.019419    5340 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:03:31.022486    5340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:03:31.024180    5340 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:03:31.024247    5340 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 01:03:31.024291    5340 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:03:31.028358    5340 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:03:31.035238    5340 start.go:297] selected driver: qemu2
	I0806 01:03:31.035243    5340 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:03:31.035258    5340 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:03:31.037419    5340 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:03:31.040371    5340 out.go:177] * Automatically selected the socket_vmnet network
	I0806 01:03:31.043533    5340 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:03:31.043552    5340 cni.go:84] Creating CNI manager for "flannel"
	I0806 01:03:31.043561    5340 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0806 01:03:31.043602    5340 start.go:340] cluster config:
	{Name:flannel-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:03:31.047062    5340 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:03:31.054426    5340 out.go:177] * Starting "flannel-187000" primary control-plane node in "flannel-187000" cluster
	I0806 01:03:31.058416    5340 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:03:31.058429    5340 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 01:03:31.058437    5340 cache.go:56] Caching tarball of preloaded images
	I0806 01:03:31.058492    5340 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:03:31.058497    5340 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:03:31.058558    5340 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/flannel-187000/config.json ...
	I0806 01:03:31.058568    5340 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/flannel-187000/config.json: {Name:mk2472353329a145dfd47efe1b3dc134735adf9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:03:31.058777    5340 start.go:360] acquireMachinesLock for flannel-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:03:31.058807    5340 start.go:364] duration metric: took 24.833µs to acquireMachinesLock for "flannel-187000"
	I0806 01:03:31.058816    5340 start.go:93] Provisioning new machine with config: &{Name:flannel-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:03:31.058844    5340 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:03:31.067457    5340 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:03:31.082356    5340 start.go:159] libmachine.API.Create for "flannel-187000" (driver="qemu2")
	I0806 01:03:31.082382    5340 client.go:168] LocalClient.Create starting
	I0806 01:03:31.082443    5340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:03:31.082472    5340 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:31.082481    5340 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:31.082519    5340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:03:31.082542    5340 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:31.082551    5340 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:31.082904    5340 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:03:31.237406    5340 main.go:141] libmachine: Creating SSH key...
	I0806 01:03:31.333700    5340 main.go:141] libmachine: Creating Disk image...
	I0806 01:03:31.333706    5340 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:03:31.333893    5340 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/disk.qcow2
	I0806 01:03:31.343269    5340 main.go:141] libmachine: STDOUT: 
	I0806 01:03:31.343287    5340 main.go:141] libmachine: STDERR: 
	I0806 01:03:31.343337    5340 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/disk.qcow2 +20000M
	I0806 01:03:31.351096    5340 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:03:31.351109    5340 main.go:141] libmachine: STDERR: 
	I0806 01:03:31.351131    5340 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/disk.qcow2
	I0806 01:03:31.351137    5340 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:03:31.351148    5340 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:03:31.351171    5340 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:28:11:e4:1b:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/disk.qcow2
	I0806 01:03:31.352678    5340 main.go:141] libmachine: STDOUT: 
	I0806 01:03:31.352694    5340 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:03:31.352714    5340 client.go:171] duration metric: took 270.329875ms to LocalClient.Create
	I0806 01:03:33.355038    5340 start.go:128] duration metric: took 2.296094333s to createHost
	I0806 01:03:33.355138    5340 start.go:83] releasing machines lock for "flannel-187000", held for 2.296336167s
	W0806 01:03:33.355188    5340 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:33.371423    5340 out.go:177] * Deleting "flannel-187000" in qemu2 ...
	W0806 01:03:33.397193    5340 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:33.397223    5340 start.go:729] Will try again in 5 seconds ...
	I0806 01:03:38.399538    5340 start.go:360] acquireMachinesLock for flannel-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:03:38.400127    5340 start.go:364] duration metric: took 476.583µs to acquireMachinesLock for "flannel-187000"
	I0806 01:03:38.400205    5340 start.go:93] Provisioning new machine with config: &{Name:flannel-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:03:38.400584    5340 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:03:38.403607    5340 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:03:38.455073    5340 start.go:159] libmachine.API.Create for "flannel-187000" (driver="qemu2")
	I0806 01:03:38.455126    5340 client.go:168] LocalClient.Create starting
	I0806 01:03:38.455248    5340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:03:38.455332    5340 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:38.455348    5340 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:38.455407    5340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:03:38.455451    5340 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:38.455469    5340 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:38.456010    5340 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:03:38.621069    5340 main.go:141] libmachine: Creating SSH key...
	I0806 01:03:38.814581    5340 main.go:141] libmachine: Creating Disk image...
	I0806 01:03:38.814591    5340 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:03:38.814829    5340 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/disk.qcow2
	I0806 01:03:38.825055    5340 main.go:141] libmachine: STDOUT: 
	I0806 01:03:38.825077    5340 main.go:141] libmachine: STDERR: 
	I0806 01:03:38.825134    5340 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/disk.qcow2 +20000M
	I0806 01:03:38.833172    5340 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:03:38.833187    5340 main.go:141] libmachine: STDERR: 
	I0806 01:03:38.833200    5340 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/disk.qcow2
	I0806 01:03:38.833203    5340 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:03:38.833214    5340 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:03:38.833246    5340 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:2e:6a:d1:c3:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/flannel-187000/disk.qcow2
	I0806 01:03:38.834905    5340 main.go:141] libmachine: STDOUT: 
	I0806 01:03:38.834921    5340 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:03:38.834934    5340 client.go:171] duration metric: took 379.805125ms to LocalClient.Create
	I0806 01:03:40.837155    5340 start.go:128] duration metric: took 2.436536458s to createHost
	I0806 01:03:40.837291    5340 start.go:83] releasing machines lock for "flannel-187000", held for 2.43714225s
	W0806 01:03:40.837683    5340 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:40.847289    5340 out.go:177] 
	W0806 01:03:40.851362    5340 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:03:40.851384    5340 out.go:239] * 
	W0806 01:03:40.853683    5340 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:03:40.862426    5340 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.93s)
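
The flannel trace above also shows minikube's one-shot retry: the first createHost fails, the half-created "flannel-187000" machine is deleted, start.go waits five seconds ("Will try again in 5 seconds ..."), and only after the second attempt fails does it exit with GUEST_PROVISION (exit status 80). The control flow, reduced to a sketch, looks like the following; this is illustrative only, not minikube's actual implementation, and startHost here always fails the way the real run did.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the createHost path in the logs above.
	func startHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// deleteHost stands in for the cleanup between the two attempts.
	func deleteHost(profile string) {
		fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
	}

	func main() {
		const profile = "flannel-187000"
		if err := startHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			deleteHost(profile)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(profile); err != nil {
				// The real binary exits with status 80 here.
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}

Because the daemon never comes up between attempts, the retry buys nothing in these runs; both attempts fail within milliseconds at the same dial.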

TestNetworkPlugins/group/enable-default-cni/Start (9.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.795093375s)

-- stdout --
	* [enable-default-cni-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-187000" primary control-plane node in "enable-default-cni-187000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-187000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:03:43.268573    5461 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:03:43.268715    5461 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:03:43.268720    5461 out.go:304] Setting ErrFile to fd 2...
	I0806 01:03:43.268722    5461 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:03:43.268833    5461 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:03:43.269869    5461 out.go:298] Setting JSON to false
	I0806 01:03:43.286634    5461 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3791,"bootTime":1722927632,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:03:43.286732    5461 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:03:43.293010    5461 out.go:177] * [enable-default-cni-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:03:43.299840    5461 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:03:43.299921    5461 notify.go:220] Checking for updates...
	I0806 01:03:43.306856    5461 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:03:43.309877    5461 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:03:43.312818    5461 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:03:43.315860    5461 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:03:43.318819    5461 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:03:43.322117    5461 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:03:43.322188    5461 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 01:03:43.322234    5461 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:03:43.325802    5461 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:03:43.332775    5461 start.go:297] selected driver: qemu2
	I0806 01:03:43.332780    5461 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:03:43.332786    5461 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:03:43.334961    5461 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:03:43.337788    5461 out.go:177] * Automatically selected the socket_vmnet network
	E0806 01:03:43.340819    5461 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0806 01:03:43.340830    5461 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:03:43.340842    5461 cni.go:84] Creating CNI manager for "bridge"
	I0806 01:03:43.340846    5461 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 01:03:43.340878    5461 start.go:340] cluster config:
	{Name:enable-default-cni-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:03:43.344287    5461 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:03:43.351815    5461 out.go:177] * Starting "enable-default-cni-187000" primary control-plane node in "enable-default-cni-187000" cluster
	I0806 01:03:43.355744    5461 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:03:43.355757    5461 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 01:03:43.355765    5461 cache.go:56] Caching tarball of preloaded images
	I0806 01:03:43.355820    5461 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:03:43.355825    5461 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:03:43.355886    5461 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/enable-default-cni-187000/config.json ...
	I0806 01:03:43.355896    5461 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/enable-default-cni-187000/config.json: {Name:mk1de3107947a3223e78d70b66b92de7e11246e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:03:43.356293    5461 start.go:360] acquireMachinesLock for enable-default-cni-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:03:43.356328    5461 start.go:364] duration metric: took 26µs to acquireMachinesLock for "enable-default-cni-187000"
	I0806 01:03:43.356337    5461 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:03:43.356365    5461 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:03:43.360873    5461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:03:43.376611    5461 start.go:159] libmachine.API.Create for "enable-default-cni-187000" (driver="qemu2")
	I0806 01:03:43.376639    5461 client.go:168] LocalClient.Create starting
	I0806 01:03:43.376705    5461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:03:43.376735    5461 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:43.376749    5461 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:43.376793    5461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:03:43.376816    5461 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:43.376826    5461 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:43.377282    5461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:03:43.530078    5461 main.go:141] libmachine: Creating SSH key...
	I0806 01:03:43.557792    5461 main.go:141] libmachine: Creating Disk image...
	I0806 01:03:43.557797    5461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:03:43.557979    5461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/disk.qcow2
	I0806 01:03:43.567046    5461 main.go:141] libmachine: STDOUT: 
	I0806 01:03:43.567066    5461 main.go:141] libmachine: STDERR: 
	I0806 01:03:43.567132    5461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/disk.qcow2 +20000M
	I0806 01:03:43.574859    5461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:03:43.574873    5461 main.go:141] libmachine: STDERR: 
	I0806 01:03:43.574887    5461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/disk.qcow2
	I0806 01:03:43.574895    5461 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:03:43.574905    5461 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:03:43.574931    5461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:de:31:5b:82:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/disk.qcow2
	I0806 01:03:43.576524    5461 main.go:141] libmachine: STDOUT: 
	I0806 01:03:43.576540    5461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:03:43.576566    5461 client.go:171] duration metric: took 199.923209ms to LocalClient.Create
	I0806 01:03:45.578651    5461 start.go:128] duration metric: took 2.222289166s to createHost
	I0806 01:03:45.578676    5461 start.go:83] releasing machines lock for "enable-default-cni-187000", held for 2.222356333s
	W0806 01:03:45.578748    5461 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:45.588778    5461 out.go:177] * Deleting "enable-default-cni-187000" in qemu2 ...
	W0806 01:03:45.608276    5461 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:45.608306    5461 start.go:729] Will try again in 5 seconds ...
	I0806 01:03:50.610491    5461 start.go:360] acquireMachinesLock for enable-default-cni-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:03:50.610856    5461 start.go:364] duration metric: took 272.125µs to acquireMachinesLock for "enable-default-cni-187000"
	I0806 01:03:50.611006    5461 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:03:50.611259    5461 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:03:50.620858    5461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:03:50.662603    5461 start.go:159] libmachine.API.Create for "enable-default-cni-187000" (driver="qemu2")
	I0806 01:03:50.662652    5461 client.go:168] LocalClient.Create starting
	I0806 01:03:50.662762    5461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:03:50.662813    5461 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:50.662828    5461 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:50.662885    5461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:03:50.662924    5461 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:50.662936    5461 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:50.663415    5461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:03:50.824753    5461 main.go:141] libmachine: Creating SSH key...
	I0806 01:03:50.980177    5461 main.go:141] libmachine: Creating Disk image...
	I0806 01:03:50.980188    5461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:03:50.980405    5461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/disk.qcow2
	I0806 01:03:50.990645    5461 main.go:141] libmachine: STDOUT: 
	I0806 01:03:50.990668    5461 main.go:141] libmachine: STDERR: 
	I0806 01:03:50.990741    5461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/disk.qcow2 +20000M
	I0806 01:03:50.999497    5461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:03:50.999516    5461 main.go:141] libmachine: STDERR: 
	I0806 01:03:50.999530    5461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/disk.qcow2
	I0806 01:03:50.999539    5461 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:03:50.999550    5461 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:03:50.999580    5461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:d4:13:05:27:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/enable-default-cni-187000/disk.qcow2
	I0806 01:03:51.001355    5461 main.go:141] libmachine: STDOUT: 
	I0806 01:03:51.001374    5461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:03:51.001387    5461 client.go:171] duration metric: took 338.732916ms to LocalClient.Create
	I0806 01:03:53.001461    5461 start.go:128] duration metric: took 2.390193208s to createHost
	I0806 01:03:53.001473    5461 start.go:83] releasing machines lock for "enable-default-cni-187000", held for 2.390598625s
	W0806 01:03:53.001571    5461 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:53.012591    5461 out.go:177] 
	W0806 01:03:53.015564    5461 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:03:53.015578    5461 out.go:239] * 
	W0806 01:03:53.016065    5461 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:03:53.028568    5461 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.80s)
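Every failure in this group follows the same pattern: QEMU is launched through socket_vmnet_client, and the client cannot reach the daemon's unix socket at /var/run/socket_vmnet. A minimal check of the daemon on the build agent, assuming the standard lima-vm/socket_vmnet layout implied by the client path logged above:

    # the socket file should exist and the daemon process should be running
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

Until both checks pass, every test below that selects the socket_vmnet network will keep exiting with status 80 in the same way.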

TestNetworkPlugins/group/bridge/Start (9.94s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.93759925s)

-- stdout --
	* [bridge-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-187000" primary control-plane node in "bridge-187000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-187000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:03:55.191107    5576 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:03:55.191246    5576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:03:55.191249    5576 out.go:304] Setting ErrFile to fd 2...
	I0806 01:03:55.191252    5576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:03:55.191356    5576 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:03:55.192472    5576 out.go:298] Setting JSON to false
	I0806 01:03:55.208937    5576 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3803,"bootTime":1722927632,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:03:55.209007    5576 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:03:55.212528    5576 out.go:177] * [bridge-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:03:55.221565    5576 notify.go:220] Checking for updates...
	I0806 01:03:55.224457    5576 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:03:55.228473    5576 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:03:55.232540    5576 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:03:55.236425    5576 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:03:55.240497    5576 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:03:55.244472    5576 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:03:55.248728    5576 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:03:55.248794    5576 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 01:03:55.248846    5576 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:03:55.252565    5576 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:03:55.259456    5576 start.go:297] selected driver: qemu2
	I0806 01:03:55.259462    5576 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:03:55.259468    5576 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:03:55.261794    5576 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:03:55.265511    5576 out.go:177] * Automatically selected the socket_vmnet network
	I0806 01:03:55.269528    5576 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:03:55.269561    5576 cni.go:84] Creating CNI manager for "bridge"
	I0806 01:03:55.269565    5576 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 01:03:55.269597    5576 start.go:340] cluster config:
	{Name:bridge-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:03:55.273183    5576 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:03:55.281341    5576 out.go:177] * Starting "bridge-187000" primary control-plane node in "bridge-187000" cluster
	I0806 01:03:55.285485    5576 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:03:55.285498    5576 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 01:03:55.285505    5576 cache.go:56] Caching tarball of preloaded images
	I0806 01:03:55.285551    5576 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:03:55.285556    5576 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:03:55.285607    5576 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/bridge-187000/config.json ...
	I0806 01:03:55.285616    5576 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/bridge-187000/config.json: {Name:mk06c595fedc33f49464071fdca0264dac4767bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:03:55.285996    5576 start.go:360] acquireMachinesLock for bridge-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:03:55.286025    5576 start.go:364] duration metric: took 23.916µs to acquireMachinesLock for "bridge-187000"
	I0806 01:03:55.286033    5576 start.go:93] Provisioning new machine with config: &{Name:bridge-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:03:55.286068    5576 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:03:55.293449    5576 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:03:55.308936    5576 start.go:159] libmachine.API.Create for "bridge-187000" (driver="qemu2")
	I0806 01:03:55.308961    5576 client.go:168] LocalClient.Create starting
	I0806 01:03:55.309021    5576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:03:55.309057    5576 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:55.309066    5576 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:55.309103    5576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:03:55.309129    5576 main.go:141] libmachine: Decoding PEM data...
	I0806 01:03:55.309138    5576 main.go:141] libmachine: Parsing certificate...
	I0806 01:03:55.309663    5576 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:03:55.463125    5576 main.go:141] libmachine: Creating SSH key...
	I0806 01:03:55.608565    5576 main.go:141] libmachine: Creating Disk image...
	I0806 01:03:55.608573    5576 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:03:55.608790    5576 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/disk.qcow2
	I0806 01:03:55.618370    5576 main.go:141] libmachine: STDOUT: 
	I0806 01:03:55.618393    5576 main.go:141] libmachine: STDERR: 
	I0806 01:03:55.618437    5576 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/disk.qcow2 +20000M
	I0806 01:03:55.626558    5576 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:03:55.626574    5576 main.go:141] libmachine: STDERR: 
	I0806 01:03:55.626588    5576 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/disk.qcow2
	I0806 01:03:55.626596    5576 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:03:55.626608    5576 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:03:55.626631    5576 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:c5:2a:a9:eb:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/disk.qcow2
	I0806 01:03:55.628255    5576 main.go:141] libmachine: STDOUT: 
	I0806 01:03:55.628275    5576 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:03:55.628293    5576 client.go:171] duration metric: took 319.329834ms to LocalClient.Create
	I0806 01:03:57.630485    5576 start.go:128] duration metric: took 2.344402917s to createHost
	I0806 01:03:57.630592    5576 start.go:83] releasing machines lock for "bridge-187000", held for 2.344572834s
	W0806 01:03:57.630684    5576 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:57.644943    5576 out.go:177] * Deleting "bridge-187000" in qemu2 ...
	W0806 01:03:57.669780    5576 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:03:57.669807    5576 start.go:729] Will try again in 5 seconds ...
	I0806 01:04:02.671942    5576 start.go:360] acquireMachinesLock for bridge-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:04:02.672252    5576 start.go:364] duration metric: took 240.042µs to acquireMachinesLock for "bridge-187000"
	I0806 01:04:02.672295    5576 start.go:93] Provisioning new machine with config: &{Name:bridge-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:04:02.672399    5576 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:04:02.683825    5576 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:04:02.716705    5576 start.go:159] libmachine.API.Create for "bridge-187000" (driver="qemu2")
	I0806 01:04:02.716750    5576 client.go:168] LocalClient.Create starting
	I0806 01:04:02.716846    5576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:04:02.716897    5576 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:02.716909    5576 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:02.716964    5576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:04:02.716997    5576 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:02.717016    5576 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:02.717433    5576 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:04:02.876507    5576 main.go:141] libmachine: Creating SSH key...
	I0806 01:04:03.038180    5576 main.go:141] libmachine: Creating Disk image...
	I0806 01:04:03.038191    5576 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:04:03.038413    5576 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/disk.qcow2
	I0806 01:04:03.048864    5576 main.go:141] libmachine: STDOUT: 
	I0806 01:04:03.048884    5576 main.go:141] libmachine: STDERR: 
	I0806 01:04:03.048947    5576 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/disk.qcow2 +20000M
	I0806 01:04:03.057813    5576 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:04:03.057832    5576 main.go:141] libmachine: STDERR: 
	I0806 01:04:03.057848    5576 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/disk.qcow2
	I0806 01:04:03.057853    5576 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:04:03.057869    5576 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:04:03.057908    5576 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:8f:01:a3:b2:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/bridge-187000/disk.qcow2
	I0806 01:04:03.059897    5576 main.go:141] libmachine: STDOUT: 
	I0806 01:04:03.059916    5576 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:04:03.059929    5576 client.go:171] duration metric: took 343.176542ms to LocalClient.Create
	I0806 01:04:05.062007    5576 start.go:128] duration metric: took 2.389612125s to createHost
	I0806 01:04:05.062038    5576 start.go:83] releasing machines lock for "bridge-187000", held for 2.389790667s
	W0806 01:04:05.062144    5576 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:05.077070    5576 out.go:177] 
	W0806 01:04:05.080009    5576 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:04:05.080023    5576 out.go:239] * 
	W0806 01:04:05.081043    5576 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:04:05.090056    5576 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.94s)
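The daemon can be probed directly with the same client binary the tests use: socket_vmnet_client connects to the socket, hands the connection to the given command as an inherited descriptor (the -netdev socket,id=net0,fd=3 in the QEMU command lines above), and execs it. A sketch that reproduces the failure path without starting QEMU, using the paths from the logs:

    # fails with the same 'Failed to connect to "/var/run/socket_vmnet":
    # Connection refused' message while the daemon is down
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true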

TestNetworkPlugins/group/kubenet/Start (9.8s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.804080417s)

-- stdout --
	* [kubenet-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-187000" primary control-plane node in "kubenet-187000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-187000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:04:07.258578    5687 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:04:07.258705    5687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:04:07.258708    5687 out.go:304] Setting ErrFile to fd 2...
	I0806 01:04:07.258711    5687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:04:07.258853    5687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:04:07.259864    5687 out.go:298] Setting JSON to false
	I0806 01:04:07.276312    5687 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3815,"bootTime":1722927632,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:04:07.276385    5687 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:04:07.280949    5687 out.go:177] * [kubenet-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:04:07.287904    5687 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:04:07.287983    5687 notify.go:220] Checking for updates...
	I0806 01:04:07.294905    5687 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:04:07.297915    5687 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:04:07.300933    5687 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:04:07.303904    5687 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:04:07.306939    5687 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:04:07.310302    5687 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:04:07.310371    5687 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 01:04:07.310425    5687 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:04:07.314917    5687 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:04:07.321879    5687 start.go:297] selected driver: qemu2
	I0806 01:04:07.321885    5687 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:04:07.321892    5687 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:04:07.324225    5687 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:04:07.327825    5687 out.go:177] * Automatically selected the socket_vmnet network
	I0806 01:04:07.331002    5687 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:04:07.331047    5687 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0806 01:04:07.331070    5687 start.go:340] cluster config:
	{Name:kubenet-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:04:07.334547    5687 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:04:07.340850    5687 out.go:177] * Starting "kubenet-187000" primary control-plane node in "kubenet-187000" cluster
	I0806 01:04:07.344850    5687 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:04:07.344862    5687 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 01:04:07.344870    5687 cache.go:56] Caching tarball of preloaded images
	I0806 01:04:07.344922    5687 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:04:07.344927    5687 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:04:07.344985    5687 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/kubenet-187000/config.json ...
	I0806 01:04:07.344999    5687 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/kubenet-187000/config.json: {Name:mka388879d348ae76741e217b85c4f505f852325 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:04:07.345220    5687 start.go:360] acquireMachinesLock for kubenet-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:04:07.345251    5687 start.go:364] duration metric: took 25.875µs to acquireMachinesLock for "kubenet-187000"
	I0806 01:04:07.345261    5687 start.go:93] Provisioning new machine with config: &{Name:kubenet-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:04:07.345292    5687 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:04:07.353902    5687 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:04:07.370004    5687 start.go:159] libmachine.API.Create for "kubenet-187000" (driver="qemu2")
	I0806 01:04:07.370031    5687 client.go:168] LocalClient.Create starting
	I0806 01:04:07.370097    5687 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:04:07.370128    5687 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:07.370136    5687 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:07.370173    5687 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:04:07.370195    5687 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:07.370204    5687 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:07.370613    5687 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:04:07.523435    5687 main.go:141] libmachine: Creating SSH key...
	I0806 01:04:07.620301    5687 main.go:141] libmachine: Creating Disk image...
	I0806 01:04:07.620308    5687 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:04:07.620508    5687 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/disk.qcow2
	I0806 01:04:07.629994    5687 main.go:141] libmachine: STDOUT: 
	I0806 01:04:07.630011    5687 main.go:141] libmachine: STDERR: 
	I0806 01:04:07.630063    5687 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/disk.qcow2 +20000M
	I0806 01:04:07.638298    5687 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:04:07.638317    5687 main.go:141] libmachine: STDERR: 
	I0806 01:04:07.638336    5687 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/disk.qcow2
	I0806 01:04:07.638341    5687 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:04:07.638353    5687 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:04:07.638382    5687 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:bf:72:6e:aa:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/disk.qcow2
	I0806 01:04:07.640111    5687 main.go:141] libmachine: STDOUT: 
	I0806 01:04:07.640126    5687 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:04:07.640145    5687 client.go:171] duration metric: took 270.110708ms to LocalClient.Create
	I0806 01:04:09.642316    5687 start.go:128] duration metric: took 2.297015958s to createHost
	I0806 01:04:09.642476    5687 start.go:83] releasing machines lock for "kubenet-187000", held for 2.297175s
	W0806 01:04:09.642534    5687 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:09.654414    5687 out.go:177] * Deleting "kubenet-187000" in qemu2 ...
	W0806 01:04:09.679156    5687 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:09.679190    5687 start.go:729] Will try again in 5 seconds ...
	I0806 01:04:14.681353    5687 start.go:360] acquireMachinesLock for kubenet-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:04:14.681619    5687 start.go:364] duration metric: took 195.042µs to acquireMachinesLock for "kubenet-187000"
	I0806 01:04:14.681666    5687 start.go:93] Provisioning new machine with config: &{Name:kubenet-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:04:14.681811    5687 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:04:14.691200    5687 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:04:14.729367    5687 start.go:159] libmachine.API.Create for "kubenet-187000" (driver="qemu2")
	I0806 01:04:14.729410    5687 client.go:168] LocalClient.Create starting
	I0806 01:04:14.729526    5687 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:04:14.729578    5687 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:14.729595    5687 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:14.729661    5687 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:04:14.729701    5687 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:14.729712    5687 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:14.730163    5687 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:04:14.893436    5687 main.go:141] libmachine: Creating SSH key...
	I0806 01:04:14.974195    5687 main.go:141] libmachine: Creating Disk image...
	I0806 01:04:14.974218    5687 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:04:14.974458    5687 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/disk.qcow2
	I0806 01:04:14.984345    5687 main.go:141] libmachine: STDOUT: 
	I0806 01:04:14.984363    5687 main.go:141] libmachine: STDERR: 
	I0806 01:04:14.984414    5687 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/disk.qcow2 +20000M
	I0806 01:04:14.992735    5687 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:04:14.992751    5687 main.go:141] libmachine: STDERR: 
	I0806 01:04:14.992765    5687 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/disk.qcow2
	I0806 01:04:14.992769    5687 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:04:14.992781    5687 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:04:14.992820    5687 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:67:8e:cf:98:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/kubenet-187000/disk.qcow2
	I0806 01:04:14.994520    5687 main.go:141] libmachine: STDOUT: 
	I0806 01:04:14.994535    5687 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:04:14.994548    5687 client.go:171] duration metric: took 265.136ms to LocalClient.Create
	I0806 01:04:16.996618    5687 start.go:128] duration metric: took 2.314808166s to createHost
	I0806 01:04:16.996670    5687 start.go:83] releasing machines lock for "kubenet-187000", held for 2.315050042s
	W0806 01:04:16.996818    5687 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:17.008177    5687 out.go:177] 
	W0806 01:04:17.012131    5687 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:04:17.012147    5687 out.go:239] * 
	W0806 01:04:17.012856    5687 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:04:17.025153    5687 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.80s)
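If the socket is missing, the daemon has to be brought back up (as root, since vmnet requires it) before this group can pass. A hypothetical restart sketch: the binary path mirrors the client path in the logs, while the --vmnet-gateway flag and address follow the lima-vm/socket_vmnet README and are not taken from this report:

    # assumed invocation; adjust the gateway address to the local install
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &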

TestNetworkPlugins/group/custom-flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.792641417s)

-- stdout --
	* [custom-flannel-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-187000" primary control-plane node in "custom-flannel-187000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-187000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:04:19.201579    5799 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:04:19.201750    5799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:04:19.201753    5799 out.go:304] Setting ErrFile to fd 2...
	I0806 01:04:19.201755    5799 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:04:19.201904    5799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:04:19.203064    5799 out.go:298] Setting JSON to false
	I0806 01:04:19.219815    5799 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3827,"bootTime":1722927632,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:04:19.219885    5799 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:04:19.226045    5799 out.go:177] * [custom-flannel-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:04:19.233008    5799 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:04:19.233068    5799 notify.go:220] Checking for updates...
	I0806 01:04:19.238475    5799 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:04:19.241916    5799 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:04:19.244973    5799 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:04:19.247983    5799 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:04:19.251071    5799 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:04:19.254408    5799 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:04:19.254470    5799 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 01:04:19.254515    5799 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:04:19.259054    5799 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:04:19.266000    5799 start.go:297] selected driver: qemu2
	I0806 01:04:19.266005    5799 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:04:19.266010    5799 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:04:19.268197    5799 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:04:19.270989    5799 out.go:177] * Automatically selected the socket_vmnet network
	I0806 01:04:19.273989    5799 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:04:19.274016    5799 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0806 01:04:19.274025    5799 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0806 01:04:19.274058    5799 start.go:340] cluster config:
	{Name:custom-flannel-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:04:19.277617    5799 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:04:19.284841    5799 out.go:177] * Starting "custom-flannel-187000" primary control-plane node in "custom-flannel-187000" cluster
	I0806 01:04:19.289014    5799 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:04:19.289037    5799 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 01:04:19.289049    5799 cache.go:56] Caching tarball of preloaded images
	I0806 01:04:19.289118    5799 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:04:19.289123    5799 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:04:19.289182    5799 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/custom-flannel-187000/config.json ...
	I0806 01:04:19.289192    5799 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/custom-flannel-187000/config.json: {Name:mk56669d74636008a1a744f721779462a2378548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:04:19.289394    5799 start.go:360] acquireMachinesLock for custom-flannel-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:04:19.289425    5799 start.go:364] duration metric: took 24.459µs to acquireMachinesLock for "custom-flannel-187000"
	I0806 01:04:19.289435    5799 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:04:19.289458    5799 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:04:19.297931    5799 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:04:19.313093    5799 start.go:159] libmachine.API.Create for "custom-flannel-187000" (driver="qemu2")
	I0806 01:04:19.313119    5799 client.go:168] LocalClient.Create starting
	I0806 01:04:19.313177    5799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:04:19.313207    5799 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:19.313221    5799 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:19.313260    5799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:04:19.313283    5799 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:19.313290    5799 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:19.313648    5799 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:04:19.466866    5799 main.go:141] libmachine: Creating SSH key...
	I0806 01:04:19.631559    5799 main.go:141] libmachine: Creating Disk image...
	I0806 01:04:19.631571    5799 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:04:19.631787    5799 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/disk.qcow2
	I0806 01:04:19.641456    5799 main.go:141] libmachine: STDOUT: 
	I0806 01:04:19.641478    5799 main.go:141] libmachine: STDERR: 
	I0806 01:04:19.641536    5799 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/disk.qcow2 +20000M
	I0806 01:04:19.649637    5799 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:04:19.649651    5799 main.go:141] libmachine: STDERR: 
	I0806 01:04:19.649660    5799 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/disk.qcow2
	I0806 01:04:19.649665    5799 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:04:19.649681    5799 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:04:19.649705    5799 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:23:48:47:0c:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/disk.qcow2
	I0806 01:04:19.651311    5799 main.go:141] libmachine: STDOUT: 
	I0806 01:04:19.651327    5799 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:04:19.651344    5799 client.go:171] duration metric: took 338.223125ms to LocalClient.Create
	I0806 01:04:21.653507    5799 start.go:128] duration metric: took 2.364040167s to createHost
	I0806 01:04:21.653568    5799 start.go:83] releasing machines lock for "custom-flannel-187000", held for 2.364151292s
	W0806 01:04:21.653647    5799 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:21.667481    5799 out.go:177] * Deleting "custom-flannel-187000" in qemu2 ...
	W0806 01:04:21.692575    5799 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:21.692617    5799 start.go:729] Will try again in 5 seconds ...
	I0806 01:04:26.694714    5799 start.go:360] acquireMachinesLock for custom-flannel-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:04:26.694907    5799 start.go:364] duration metric: took 148.708µs to acquireMachinesLock for "custom-flannel-187000"
	I0806 01:04:26.694963    5799 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:04:26.695043    5799 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:04:26.703361    5799 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:04:26.720176    5799 start.go:159] libmachine.API.Create for "custom-flannel-187000" (driver="qemu2")
	I0806 01:04:26.720201    5799 client.go:168] LocalClient.Create starting
	I0806 01:04:26.720265    5799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:04:26.720302    5799 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:26.720311    5799 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:26.720353    5799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:04:26.720376    5799 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:26.720382    5799 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:26.720684    5799 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:04:26.875579    5799 main.go:141] libmachine: Creating SSH key...
	I0806 01:04:26.901470    5799 main.go:141] libmachine: Creating Disk image...
	I0806 01:04:26.901475    5799 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:04:26.901662    5799 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/disk.qcow2
	I0806 01:04:26.910839    5799 main.go:141] libmachine: STDOUT: 
	I0806 01:04:26.910857    5799 main.go:141] libmachine: STDERR: 
	I0806 01:04:26.910917    5799 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/disk.qcow2 +20000M
	I0806 01:04:26.918705    5799 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:04:26.918725    5799 main.go:141] libmachine: STDERR: 
	I0806 01:04:26.918735    5799 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/disk.qcow2
	I0806 01:04:26.918739    5799 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:04:26.918750    5799 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:04:26.918776    5799 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:b6:79:4f:78:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/custom-flannel-187000/disk.qcow2
	I0806 01:04:26.920420    5799 main.go:141] libmachine: STDOUT: 
	I0806 01:04:26.920435    5799 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:04:26.920446    5799 client.go:171] duration metric: took 200.24375ms to LocalClient.Create
	I0806 01:04:28.922641    5799 start.go:128] duration metric: took 2.227584459s to createHost
	I0806 01:04:28.922719    5799 start.go:83] releasing machines lock for "custom-flannel-187000", held for 2.227815875s
	W0806 01:04:28.923111    5799 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:28.936716    5799 out.go:177] 
	W0806 01:04:28.940803    5799 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:04:28.940828    5799 out.go:239] * 
	* 
	W0806 01:04:28.943479    5799 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:04:28.955613    5799 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.79s)
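Every failure in this group has the same root cause, visible in the stderr above: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, but nothing is listening on /var/run/socket_vmnet, so the client exits with "Connection refused" before the VM ever gets its network device. As a minimal standalone sketch (hypothetical, not part of the minikube test suite; the socket path is taken from the SocketVMnetPath field in the cluster config above), the failing pre-flight condition can be reproduced in Go by dialing the unix socket directly:

	// socketprobe.go: hypothetical standalone probe, not minikube code.
	// It dials the socket_vmnet unix socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Path taken from SocketVMnetPath in the logs above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Same condition the driver reports: no daemon is listening on the socket.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails the same way on the build agent, the socket_vmnet daemon (normally started as root, e.g. via launchd) is simply not running, and every qemu2 test that selects the socket_vmnet network will fail identically.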

TestNetworkPlugins/group/calico/Start (9.77s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.763684625s)

-- stdout --
	* [calico-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-187000" primary control-plane node in "calico-187000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-187000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:04:31.315035    5918 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:04:31.315176    5918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:04:31.315179    5918 out.go:304] Setting ErrFile to fd 2...
	I0806 01:04:31.315181    5918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:04:31.315300    5918 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:04:31.316453    5918 out.go:298] Setting JSON to false
	I0806 01:04:31.332763    5918 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3839,"bootTime":1722927632,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:04:31.332847    5918 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:04:31.339576    5918 out.go:177] * [calico-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:04:31.346515    5918 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:04:31.346628    5918 notify.go:220] Checking for updates...
	I0806 01:04:31.353555    5918 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:04:31.356502    5918 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:04:31.359597    5918 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:04:31.362531    5918 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:04:31.365516    5918 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:04:31.368970    5918 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:04:31.369033    5918 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 01:04:31.369079    5918 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:04:31.373595    5918 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:04:31.380548    5918 start.go:297] selected driver: qemu2
	I0806 01:04:31.380556    5918 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:04:31.380563    5918 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:04:31.382814    5918 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:04:31.385588    5918 out.go:177] * Automatically selected the socket_vmnet network
	I0806 01:04:31.388654    5918 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:04:31.388677    5918 cni.go:84] Creating CNI manager for "calico"
	I0806 01:04:31.388682    5918 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0806 01:04:31.388736    5918 start.go:340] cluster config:
	{Name:calico-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:04:31.392335    5918 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:04:31.399558    5918 out.go:177] * Starting "calico-187000" primary control-plane node in "calico-187000" cluster
	I0806 01:04:31.403559    5918 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:04:31.403575    5918 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 01:04:31.403585    5918 cache.go:56] Caching tarball of preloaded images
	I0806 01:04:31.403644    5918 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:04:31.403649    5918 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:04:31.403717    5918 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/calico-187000/config.json ...
	I0806 01:04:31.403729    5918 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/calico-187000/config.json: {Name:mk76d057d825f347f52b3eb361a6bfa192bd5988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:04:31.404121    5918 start.go:360] acquireMachinesLock for calico-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:04:31.404152    5918 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "calico-187000"
	I0806 01:04:31.404161    5918 start.go:93] Provisioning new machine with config: &{Name:calico-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:04:31.404187    5918 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:04:31.412516    5918 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:04:31.428854    5918 start.go:159] libmachine.API.Create for "calico-187000" (driver="qemu2")
	I0806 01:04:31.428882    5918 client.go:168] LocalClient.Create starting
	I0806 01:04:31.428948    5918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:04:31.428978    5918 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:31.428986    5918 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:31.429029    5918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:04:31.429052    5918 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:31.429056    5918 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:31.429582    5918 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:04:31.581643    5918 main.go:141] libmachine: Creating SSH key...
	I0806 01:04:31.632876    5918 main.go:141] libmachine: Creating Disk image...
	I0806 01:04:31.632882    5918 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:04:31.633058    5918 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/disk.qcow2
	I0806 01:04:31.642796    5918 main.go:141] libmachine: STDOUT: 
	I0806 01:04:31.642817    5918 main.go:141] libmachine: STDERR: 
	I0806 01:04:31.642873    5918 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/disk.qcow2 +20000M
	I0806 01:04:31.650950    5918 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:04:31.650964    5918 main.go:141] libmachine: STDERR: 
	I0806 01:04:31.650981    5918 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/disk.qcow2
	I0806 01:04:31.650988    5918 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:04:31.651005    5918 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:04:31.651029    5918 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:f6:38:e4:5c:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/disk.qcow2
	I0806 01:04:31.652715    5918 main.go:141] libmachine: STDOUT: 
	I0806 01:04:31.652731    5918 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:04:31.652748    5918 client.go:171] duration metric: took 223.863584ms to LocalClient.Create
	I0806 01:04:33.654834    5918 start.go:128] duration metric: took 2.25065s to createHost
	I0806 01:04:33.654866    5918 start.go:83] releasing machines lock for "calico-187000", held for 2.250723584s
	W0806 01:04:33.654919    5918 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:33.671074    5918 out.go:177] * Deleting "calico-187000" in qemu2 ...
	W0806 01:04:33.689159    5918 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:33.689171    5918 start.go:729] Will try again in 5 seconds ...
	I0806 01:04:38.691353    5918 start.go:360] acquireMachinesLock for calico-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:04:38.691954    5918 start.go:364] duration metric: took 469.334µs to acquireMachinesLock for "calico-187000"
	I0806 01:04:38.692122    5918 start.go:93] Provisioning new machine with config: &{Name:calico-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:04:38.692325    5918 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:04:38.700976    5918 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:04:38.750529    5918 start.go:159] libmachine.API.Create for "calico-187000" (driver="qemu2")
	I0806 01:04:38.750590    5918 client.go:168] LocalClient.Create starting
	I0806 01:04:38.750702    5918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:04:38.750764    5918 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:38.750781    5918 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:38.750842    5918 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:04:38.750894    5918 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:38.750904    5918 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:38.751558    5918 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:04:38.914493    5918 main.go:141] libmachine: Creating SSH key...
	I0806 01:04:38.988391    5918 main.go:141] libmachine: Creating Disk image...
	I0806 01:04:38.988401    5918 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:04:38.988599    5918 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/disk.qcow2
	I0806 01:04:38.997998    5918 main.go:141] libmachine: STDOUT: 
	I0806 01:04:38.998015    5918 main.go:141] libmachine: STDERR: 
	I0806 01:04:38.998074    5918 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/disk.qcow2 +20000M
	I0806 01:04:39.006130    5918 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:04:39.006143    5918 main.go:141] libmachine: STDERR: 
	I0806 01:04:39.006153    5918 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/disk.qcow2
	I0806 01:04:39.006158    5918 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:04:39.006173    5918 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:04:39.006202    5918 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:a1:d6:a0:4f:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/calico-187000/disk.qcow2
	I0806 01:04:39.007814    5918 main.go:141] libmachine: STDOUT: 
	I0806 01:04:39.007832    5918 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:04:39.007844    5918 client.go:171] duration metric: took 257.249125ms to LocalClient.Create
	I0806 01:04:41.010034    5918 start.go:128] duration metric: took 2.31768575s to createHost
	I0806 01:04:41.010108    5918 start.go:83] releasing machines lock for "calico-187000", held for 2.318124917s
	W0806 01:04:41.010583    5918 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:41.022191    5918 out.go:177] 
	W0806 01:04:41.026246    5918 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:04:41.026314    5918 out.go:239] * 
	* 
	W0806 01:04:41.028738    5918 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:04:41.042050    5918 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.77s)
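The retry shape is identical in every run above: the first createHost fails, the half-created profile is deleted, the driver waits a fixed five seconds ("Will try again in 5 seconds ..."), the second attempt fails the same way, and the command exits with status 80 (GUEST_PROVISION), which net_test.go:114 then reports as "failed start". A simplified sketch of that control flow (an illustration only, not minikube's actual implementation; createHost here is a stand-in that always fails the way these logs do):

	// retryflow.go: hypothetical sketch of the one-retry start flow
	// observed in the logs above; not minikube code.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the qemu2 driver's host creation and fails
	// the way the logs do when no socket_vmnet daemon is listening.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		profile := "calico-187000" // profile name from the run above
		if err := createHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(profile); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				os.Exit(80) // the exit status net_test.go asserts against
			}
		}
	}

Because both attempts hit the same refused socket, the pause never helps; the two roughly 2.3-second createHost attempts plus the fixed 5-second sleep account for nearly all of each test's roughly 9.8-second wall time.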

TestNetworkPlugins/group/false/Start (9.79s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-187000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.789670291s)

-- stdout --
	* [false-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-187000" primary control-plane node in "false-187000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-187000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:04:43.412638    6040 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:04:43.412758    6040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:04:43.412761    6040 out.go:304] Setting ErrFile to fd 2...
	I0806 01:04:43.412763    6040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:04:43.412906    6040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:04:43.414060    6040 out.go:298] Setting JSON to false
	I0806 01:04:43.430732    6040 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3851,"bootTime":1722927632,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:04:43.430803    6040 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:04:43.434737    6040 out.go:177] * [false-187000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:04:43.441790    6040 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:04:43.441813    6040 notify.go:220] Checking for updates...
	I0806 01:04:43.448803    6040 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:04:43.451815    6040 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:04:43.454771    6040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:04:43.457792    6040 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:04:43.460814    6040 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:04:43.462519    6040 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:04:43.462589    6040 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 01:04:43.462643    6040 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:04:43.466735    6040 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:04:43.473549    6040 start.go:297] selected driver: qemu2
	I0806 01:04:43.473555    6040 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:04:43.473562    6040 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:04:43.475850    6040 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:04:43.478762    6040 out.go:177] * Automatically selected the socket_vmnet network
	I0806 01:04:43.481854    6040 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:04:43.481883    6040 cni.go:84] Creating CNI manager for "false"
	I0806 01:04:43.481905    6040 start.go:340] cluster config:
	{Name:false-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:04:43.485378    6040 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:04:43.492717    6040 out.go:177] * Starting "false-187000" primary control-plane node in "false-187000" cluster
	I0806 01:04:43.496783    6040 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:04:43.496799    6040 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 01:04:43.496806    6040 cache.go:56] Caching tarball of preloaded images
	I0806 01:04:43.496857    6040 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:04:43.496862    6040 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:04:43.496921    6040 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/false-187000/config.json ...
	I0806 01:04:43.496936    6040 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/false-187000/config.json: {Name:mkeca73c946876c26f277f451a4701569e86d2c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:04:43.497225    6040 start.go:360] acquireMachinesLock for false-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:04:43.497254    6040 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "false-187000"
	I0806 01:04:43.497263    6040 start.go:93] Provisioning new machine with config: &{Name:false-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:04:43.497285    6040 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:04:43.505717    6040 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:04:43.520301    6040 start.go:159] libmachine.API.Create for "false-187000" (driver="qemu2")
	I0806 01:04:43.520323    6040 client.go:168] LocalClient.Create starting
	I0806 01:04:43.520381    6040 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:04:43.520412    6040 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:43.520422    6040 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:43.520456    6040 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:04:43.520482    6040 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:43.520495    6040 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:43.520843    6040 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:04:43.674409    6040 main.go:141] libmachine: Creating SSH key...
	I0806 01:04:43.738461    6040 main.go:141] libmachine: Creating Disk image...
	I0806 01:04:43.738470    6040 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:04:43.738665    6040 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/disk.qcow2
	I0806 01:04:43.747818    6040 main.go:141] libmachine: STDOUT: 
	I0806 01:04:43.747840    6040 main.go:141] libmachine: STDERR: 
	I0806 01:04:43.747893    6040 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/disk.qcow2 +20000M
	I0806 01:04:43.755823    6040 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:04:43.755837    6040 main.go:141] libmachine: STDERR: 
	I0806 01:04:43.755858    6040 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/disk.qcow2
	I0806 01:04:43.755864    6040 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:04:43.755877    6040 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:04:43.755904    6040 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:fb:b6:4e:60:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/disk.qcow2
	I0806 01:04:43.757529    6040 main.go:141] libmachine: STDOUT: 
	I0806 01:04:43.757544    6040 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:04:43.757560    6040 client.go:171] duration metric: took 237.236708ms to LocalClient.Create
	I0806 01:04:45.759733    6040 start.go:128] duration metric: took 2.262436625s to createHost
	I0806 01:04:45.759803    6040 start.go:83] releasing machines lock for "false-187000", held for 2.262556791s
	W0806 01:04:45.759953    6040 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:45.765688    6040 out.go:177] * Deleting "false-187000" in qemu2 ...
	W0806 01:04:45.790974    6040 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:45.791000    6040 start.go:729] Will try again in 5 seconds ...
	I0806 01:04:50.793143    6040 start.go:360] acquireMachinesLock for false-187000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:04:50.793707    6040 start.go:364] duration metric: took 476.292µs to acquireMachinesLock for "false-187000"
	I0806 01:04:50.793850    6040 start.go:93] Provisioning new machine with config: &{Name:false-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:04:50.794114    6040 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:04:50.801060    6040 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:04:50.844491    6040 start.go:159] libmachine.API.Create for "false-187000" (driver="qemu2")
	I0806 01:04:50.844553    6040 client.go:168] LocalClient.Create starting
	I0806 01:04:50.844678    6040 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:04:50.844742    6040 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:50.844757    6040 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:50.844829    6040 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:04:50.844875    6040 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:50.844897    6040 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:50.845433    6040 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:04:51.005283    6040 main.go:141] libmachine: Creating SSH key...
	I0806 01:04:51.105758    6040 main.go:141] libmachine: Creating Disk image...
	I0806 01:04:51.105770    6040 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:04:51.105964    6040 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/disk.qcow2
	I0806 01:04:51.115554    6040 main.go:141] libmachine: STDOUT: 
	I0806 01:04:51.115570    6040 main.go:141] libmachine: STDERR: 
	I0806 01:04:51.115635    6040 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/disk.qcow2 +20000M
	I0806 01:04:51.123842    6040 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:04:51.123857    6040 main.go:141] libmachine: STDERR: 
	I0806 01:04:51.123868    6040 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/disk.qcow2
	I0806 01:04:51.123873    6040 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:04:51.123883    6040 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:04:51.123913    6040 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:06:4d:9d:49:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/false-187000/disk.qcow2
	I0806 01:04:51.125669    6040 main.go:141] libmachine: STDOUT: 
	I0806 01:04:51.125683    6040 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:04:51.125696    6040 client.go:171] duration metric: took 281.138875ms to LocalClient.Create
	I0806 01:04:53.127907    6040 start.go:128] duration metric: took 2.333767375s to createHost
	I0806 01:04:53.127999    6040 start.go:83] releasing machines lock for "false-187000", held for 2.334270209s
	W0806 01:04:53.128477    6040 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:53.142079    6040 out.go:177] 
	W0806 01:04:53.145962    6040 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:04:53.146044    6040 out.go:239] * 
	* 
	W0806 01:04:53.148889    6040 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:04:53.165088    6040 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.79s)
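
Every failure in this group has the same root cause: the qemu2 driver launches its VM through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, i.e. the socket_vmnet daemon is not listening on this agent. A minimal way to confirm that on the Jenkins host, assuming socket_vmnet was installed through Homebrew as minikube's qemu2 driver docs suggest (the socket path is taken from the log above; the service commands are an assumption, not verified on this machine):

    # Does the daemon's unix socket exist, and is the service loaded?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet

    # If not, (re)start it. minikube's docs run brew services via sudo,
    # since socket_vmnet needs root to create vmnet interfaces.
    BREW=$(which brew) && sudo "$BREW" services restart socket_vmnet

Until that socket accepts connections, every test below that boots a qemu2 VM fails identically.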

TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-295000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-295000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.782168625s)

-- stdout --
	* [old-k8s-version-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-295000" primary control-plane node in "old-k8s-version-295000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-295000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:04:55.339985    6154 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:04:55.340127    6154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:04:55.340131    6154 out.go:304] Setting ErrFile to fd 2...
	I0806 01:04:55.340133    6154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:04:55.340266    6154 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:04:55.341323    6154 out.go:298] Setting JSON to false
	I0806 01:04:55.357488    6154 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3863,"bootTime":1722927632,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:04:55.357552    6154 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:04:55.364113    6154 out.go:177] * [old-k8s-version-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:04:55.371279    6154 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:04:55.371323    6154 notify.go:220] Checking for updates...
	I0806 01:04:55.377213    6154 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:04:55.380259    6154 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:04:55.381772    6154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:04:55.384202    6154 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:04:55.387262    6154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:04:55.390625    6154 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:04:55.390692    6154 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 01:04:55.390736    6154 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:04:55.395157    6154 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:04:55.402252    6154 start.go:297] selected driver: qemu2
	I0806 01:04:55.402259    6154 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:04:55.402268    6154 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:04:55.404457    6154 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:04:55.407264    6154 out.go:177] * Automatically selected the socket_vmnet network
	I0806 01:04:55.410323    6154 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:04:55.410341    6154 cni.go:84] Creating CNI manager for ""
	I0806 01:04:55.410347    6154 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0806 01:04:55.410375    6154 start.go:340] cluster config:
	{Name:old-k8s-version-295000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:04:55.414092    6154 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:04:55.421127    6154 out.go:177] * Starting "old-k8s-version-295000" primary control-plane node in "old-k8s-version-295000" cluster
	I0806 01:04:55.425167    6154 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0806 01:04:55.425181    6154 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0806 01:04:55.425187    6154 cache.go:56] Caching tarball of preloaded images
	I0806 01:04:55.425240    6154 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:04:55.425245    6154 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0806 01:04:55.425299    6154 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/old-k8s-version-295000/config.json ...
	I0806 01:04:55.425311    6154 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/old-k8s-version-295000/config.json: {Name:mk2121b85793b7e138c8c732528ac3a8f830ba03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:04:55.425514    6154 start.go:360] acquireMachinesLock for old-k8s-version-295000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:04:55.425545    6154 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "old-k8s-version-295000"
	I0806 01:04:55.425555    6154 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:04:55.425583    6154 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:04:55.432236    6154 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 01:04:55.447411    6154 start.go:159] libmachine.API.Create for "old-k8s-version-295000" (driver="qemu2")
	I0806 01:04:55.447441    6154 client.go:168] LocalClient.Create starting
	I0806 01:04:55.447497    6154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:04:55.447532    6154 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:55.447543    6154 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:55.447581    6154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:04:55.447603    6154 main.go:141] libmachine: Decoding PEM data...
	I0806 01:04:55.447610    6154 main.go:141] libmachine: Parsing certificate...
	I0806 01:04:55.447952    6154 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:04:55.601138    6154 main.go:141] libmachine: Creating SSH key...
	I0806 01:04:55.749618    6154 main.go:141] libmachine: Creating Disk image...
	I0806 01:04:55.749632    6154 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:04:55.749853    6154 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2
	I0806 01:04:55.760073    6154 main.go:141] libmachine: STDOUT: 
	I0806 01:04:55.760096    6154 main.go:141] libmachine: STDERR: 
	I0806 01:04:55.760156    6154 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2 +20000M
	I0806 01:04:55.768931    6154 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:04:55.768949    6154 main.go:141] libmachine: STDERR: 
	I0806 01:04:55.768976    6154 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2
	I0806 01:04:55.768981    6154 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:04:55.768992    6154 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:04:55.769026    6154 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:33:3c:49:22:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2
	I0806 01:04:55.770802    6154 main.go:141] libmachine: STDOUT: 
	I0806 01:04:55.770818    6154 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:04:55.770835    6154 client.go:171] duration metric: took 323.393416ms to LocalClient.Create
	I0806 01:04:57.773226    6154 start.go:128] duration metric: took 2.347598583s to createHost
	I0806 01:04:57.773401    6154 start.go:83] releasing machines lock for "old-k8s-version-295000", held for 2.347790042s
	W0806 01:04:57.773460    6154 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:57.781870    6154 out.go:177] * Deleting "old-k8s-version-295000" in qemu2 ...
	W0806 01:04:57.809717    6154 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:04:57.809746    6154 start.go:729] Will try again in 5 seconds ...
	I0806 01:05:02.811910    6154 start.go:360] acquireMachinesLock for old-k8s-version-295000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:02.812105    6154 start.go:364] duration metric: took 162.875µs to acquireMachinesLock for "old-k8s-version-295000"
	I0806 01:05:02.812131    6154 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:05:02.812199    6154 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:05:02.821458    6154 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 01:05:02.837411    6154 start.go:159] libmachine.API.Create for "old-k8s-version-295000" (driver="qemu2")
	I0806 01:05:02.837456    6154 client.go:168] LocalClient.Create starting
	I0806 01:05:02.837530    6154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:05:02.837568    6154 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:02.837579    6154 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:02.837617    6154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:05:02.837640    6154 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:02.837646    6154 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:02.837936    6154 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:05:02.994432    6154 main.go:141] libmachine: Creating SSH key...
	I0806 01:05:03.034258    6154 main.go:141] libmachine: Creating Disk image...
	I0806 01:05:03.034264    6154 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:05:03.034458    6154 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2
	I0806 01:05:03.044157    6154 main.go:141] libmachine: STDOUT: 
	I0806 01:05:03.044179    6154 main.go:141] libmachine: STDERR: 
	I0806 01:05:03.044232    6154 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2 +20000M
	I0806 01:05:03.052202    6154 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:05:03.052222    6154 main.go:141] libmachine: STDERR: 
	I0806 01:05:03.052234    6154 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2
	I0806 01:05:03.052239    6154 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:05:03.052250    6154 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:03.052283    6154 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:33:ba:95:d5:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2
	I0806 01:05:03.054072    6154 main.go:141] libmachine: STDOUT: 
	I0806 01:05:03.054096    6154 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:03.054108    6154 client.go:171] duration metric: took 216.649542ms to LocalClient.Create
	I0806 01:05:05.056339    6154 start.go:128] duration metric: took 2.244133542s to createHost
	I0806 01:05:05.056410    6154 start.go:83] releasing machines lock for "old-k8s-version-295000", held for 2.244310959s
	W0806 01:05:05.056769    6154 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:05.068415    6154 out.go:177] 
	W0806 01:05:05.072357    6154 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:05:05.072373    6154 out.go:239] * 
	* 
	W0806 01:05:05.073757    6154 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:05:05.085401    6154 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-295000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000: exit status 7 (41.974959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-295000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)
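
Both exit codes above are minikube conventions rather than generic failures: start exits 80 for the GUEST_PROVISION error class shown in the log, and status composes its exit code as a bitmask, so 7 would mean host, control plane and kubelet are all flagged down (1|2|4), which matches the "Stopped" output. That is my reading of minikube's conventions and worth confirming against the v1.33.1 source; the post-mortem can be replayed by hand:

    # Same command the harness runs; prints only the host state.
    out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000
    echo $?   # 7 here, i.e. nothing in this profile is running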

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-295000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-295000 create -f testdata/busybox.yaml: exit status 1 (27.865292ms)

** stderr ** 
	error: context "old-k8s-version-295000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-295000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000: exit status 7 (28.709709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-295000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000: exit status 7 (28.6495ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-295000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
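
This failure is purely downstream of FirstStart: the cluster was never created, so the kubeconfig has no old-k8s-version-295000 context for kubectl to target. A quick check of which contexts actually exist, using the KUBECONFIG path from the log (standard kubectl):

    KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig \
        kubectl config get-contexts
    # A healthy run would list old-k8s-version-295000, after which the
    # test's own command would succeed:
    #   kubectl --context old-k8s-version-295000 create -f testdata/busybox.yaml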

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-295000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-295000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-295000 describe deploy/metrics-server -n kube-system: exit status 1 (26.685167ms)

** stderr ** 
	error: context "old-k8s-version-295000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-295000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000: exit status 7 (29.187625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-295000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
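
Note that addons enable itself succeeded (presumably it only rewrites the profile's addon config while the host is down), and only the follow-up kubectl describe fails on the missing context. On a running cluster, the assertion reduces to checking that the deployment's image was rewritten to the fake registry, roughly like this sketch (the jsonpath query is mine, not taken from the test):

    kubectl --context old-k8s-version-295000 -n kube-system get deploy metrics-server \
        -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to contain fake.domain/registry.k8s.io/echoserver:1.4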

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-295000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-295000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.191373542s)

-- stdout --
	* [old-k8s-version-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-295000" primary control-plane node in "old-k8s-version-295000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-295000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-295000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:05:08.734617    6208 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:05:08.734750    6208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:08.734755    6208 out.go:304] Setting ErrFile to fd 2...
	I0806 01:05:08.734758    6208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:08.734902    6208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:05:08.735946    6208 out.go:298] Setting JSON to false
	I0806 01:05:08.752383    6208 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3876,"bootTime":1722927632,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:05:08.752458    6208 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:05:08.757507    6208 out.go:177] * [old-k8s-version-295000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:05:08.765454    6208 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:05:08.765520    6208 notify.go:220] Checking for updates...
	I0806 01:05:08.772525    6208 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:05:08.775546    6208 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:05:08.778512    6208 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:05:08.781538    6208 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:05:08.784495    6208 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:05:08.787747    6208 config.go:182] Loaded profile config "old-k8s-version-295000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0806 01:05:08.790451    6208 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0806 01:05:08.791637    6208 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:05:08.796465    6208 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 01:05:08.807390    6208 start.go:297] selected driver: qemu2
	I0806 01:05:08.807395    6208 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:05:08.807445    6208 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:05:08.809943    6208 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:05:08.809965    6208 cni.go:84] Creating CNI manager for ""
	I0806 01:05:08.809972    6208 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0806 01:05:08.810000    6208 start.go:340] cluster config:
	{Name:old-k8s-version-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:05:08.813682    6208 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:08.820454    6208 out.go:177] * Starting "old-k8s-version-295000" primary control-plane node in "old-k8s-version-295000" cluster
	I0806 01:05:08.824422    6208 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0806 01:05:08.824440    6208 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0806 01:05:08.824449    6208 cache.go:56] Caching tarball of preloaded images
	I0806 01:05:08.824512    6208 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:05:08.824518    6208 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0806 01:05:08.824579    6208 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/old-k8s-version-295000/config.json ...
	I0806 01:05:08.825005    6208 start.go:360] acquireMachinesLock for old-k8s-version-295000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:08.825044    6208 start.go:364] duration metric: took 25.458µs to acquireMachinesLock for "old-k8s-version-295000"
	I0806 01:05:08.825053    6208 start.go:96] Skipping create...Using existing machine configuration
	I0806 01:05:08.825062    6208 fix.go:54] fixHost starting: 
	I0806 01:05:08.825179    6208 fix.go:112] recreateIfNeeded on old-k8s-version-295000: state=Stopped err=<nil>
	W0806 01:05:08.825188    6208 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 01:05:08.829284    6208 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-295000" ...
	I0806 01:05:08.837436    6208 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:08.837469    6208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:33:ba:95:d5:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2
	I0806 01:05:08.839323    6208 main.go:141] libmachine: STDOUT: 
	I0806 01:05:08.839338    6208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:08.839364    6208 fix.go:56] duration metric: took 14.303208ms for fixHost
	I0806 01:05:08.839369    6208 start.go:83] releasing machines lock for "old-k8s-version-295000", held for 14.319916ms
	W0806 01:05:08.839374    6208 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:05:08.839411    6208 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:08.839415    6208 start.go:729] Will try again in 5 seconds ...
	I0806 01:05:13.841652    6208 start.go:360] acquireMachinesLock for old-k8s-version-295000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:13.842110    6208 start.go:364] duration metric: took 368µs to acquireMachinesLock for "old-k8s-version-295000"
	I0806 01:05:13.842189    6208 start.go:96] Skipping create...Using existing machine configuration
	I0806 01:05:13.842212    6208 fix.go:54] fixHost starting: 
	I0806 01:05:13.842934    6208 fix.go:112] recreateIfNeeded on old-k8s-version-295000: state=Stopped err=<nil>
	W0806 01:05:13.842962    6208 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 01:05:13.852599    6208 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-295000" ...
	I0806 01:05:13.856605    6208 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:13.856973    6208 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:33:ba:95:d5:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/old-k8s-version-295000/disk.qcow2
	I0806 01:05:13.864891    6208 main.go:141] libmachine: STDOUT: 
	I0806 01:05:13.864947    6208 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:13.865044    6208 fix.go:56] duration metric: took 22.837166ms for fixHost
	I0806 01:05:13.865064    6208 start.go:83] releasing machines lock for "old-k8s-version-295000", held for 22.932208ms
	W0806 01:05:13.865269    6208 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-295000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:13.872585    6208 out.go:177] 
	W0806 01:05:13.876679    6208 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:05:13.876705    6208 out.go:239] * 
	W0806 01:05:13.878244    6208 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:05:13.885494    6208 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-295000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000: exit status 7 (59.104833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-295000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
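
Every start attempt in this section dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and provisioning aborts with GUEST_PROVISION. A minimal shell sketch for checking the daemon on the build host (the Homebrew service name is an assumption; adjust to the local install):

	# Does the socket minikube was configured with exist at all?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon process alive?
	pgrep -fl socket_vmnet
	# If installed via Homebrew, restarting the service may recover it
	sudo brew services restart socket_vmnet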

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-295000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000: exit status 7 (31.44375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-295000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
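
This failure is pure fallout from the failed SecondStart above: the restart never completed, so minikube never wrote a kubeconfig entry for the profile, and every kubectl call against the "old-k8s-version-295000" context fails before reaching any cluster. The missing context is easy to confirm with stock kubectl:

	# List the contexts that actually exist in the active kubeconfig
	kubectl config get-contexts
	# Or print just the names
	kubectl config view -o jsonpath='{.contexts[*].name}'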

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-295000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-295000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-295000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.396584ms)

** stderr ** 
	error: context "old-k8s-version-295000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-295000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000: exit status 7 (28.373208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-295000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-295000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000: exit status 7 (29.315667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-295000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
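
The -want/+got diff above has an empty "got" side: with the VM never booted, image list has no runtime to query, so every expected v1.20.0 image is reported missing rather than any single image being wrong. On a healthy cluster the same assertion can be approximated by hand (a sketch using this run's profile name):

	out/minikube-darwin-arm64 -p old-k8s-version-295000 image list --format=table | grep k8s.gcr.io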

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-295000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-295000 --alsologtostderr -v=1: exit status 83 (41.101459ms)

-- stdout --
	* The control-plane node old-k8s-version-295000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-295000"

-- /stdout --
** stderr ** 
	I0806 01:05:14.146145    6231 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:05:14.147150    6231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:14.147154    6231 out.go:304] Setting ErrFile to fd 2...
	I0806 01:05:14.147157    6231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:14.147332    6231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:05:14.147539    6231 out.go:298] Setting JSON to false
	I0806 01:05:14.147546    6231 mustload.go:65] Loading cluster: old-k8s-version-295000
	I0806 01:05:14.147733    6231 config.go:182] Loaded profile config "old-k8s-version-295000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0806 01:05:14.151414    6231 out.go:177] * The control-plane node old-k8s-version-295000 host is not running: state=Stopped
	I0806 01:05:14.154446    6231 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-295000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-295000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000: exit status 7 (29.127ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-295000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000: exit status 7 (28.587916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-295000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
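
pause never reaches the container runtime here: mustload sees the host in state=Stopped and exits 83 with the start hint instead of attempting anything, which both post-mortem status probes confirm. The manual retry the output implies (profile name taken from this run):

	out/minikube-darwin-arm64 start -p old-k8s-version-295000
	out/minikube-darwin-arm64 pause -p old-k8s-version-295000 --alsologtostderr -v=1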

TestStartStop/group/no-preload/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-244000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-244000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.917440541s)

-- stdout --
	* [no-preload-244000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-244000" primary control-plane node in "no-preload-244000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-244000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:05:14.475462    6248 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:05:14.475767    6248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:14.475770    6248 out.go:304] Setting ErrFile to fd 2...
	I0806 01:05:14.475773    6248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:14.475920    6248 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:05:14.477263    6248 out.go:298] Setting JSON to false
	I0806 01:05:14.493850    6248 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3882,"bootTime":1722927632,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:05:14.493927    6248 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:05:14.498294    6248 out.go:177] * [no-preload-244000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:05:14.501261    6248 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:05:14.501301    6248 notify.go:220] Checking for updates...
	I0806 01:05:14.512186    6248 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:05:14.515218    6248 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:05:14.518239    6248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:05:14.521196    6248 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:05:14.524211    6248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:05:14.527492    6248 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:05:14.527557    6248 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0806 01:05:14.527602    6248 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:05:14.531153    6248 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:05:14.538242    6248 start.go:297] selected driver: qemu2
	I0806 01:05:14.538248    6248 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:05:14.538253    6248 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:05:14.540359    6248 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:05:14.543191    6248 out.go:177] * Automatically selected the socket_vmnet network
	I0806 01:05:14.546241    6248 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:05:14.546269    6248 cni.go:84] Creating CNI manager for ""
	I0806 01:05:14.546274    6248 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:05:14.546280    6248 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 01:05:14.546302    6248 start.go:340] cluster config:
	{Name:no-preload-244000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:05:14.549842    6248 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:14.555219    6248 out.go:177] * Starting "no-preload-244000" primary control-plane node in "no-preload-244000" cluster
	I0806 01:05:14.559227    6248 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 01:05:14.559291    6248 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/no-preload-244000/config.json ...
	I0806 01:05:14.559306    6248 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/no-preload-244000/config.json: {Name:mk1177ba9e6f619d6bbb2e026126c160c8f55f57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:05:14.559346    6248 cache.go:107] acquiring lock: {Name:mk092792f1d077f24b78422b7c0bdf32a6e62d44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:14.559405    6248 cache.go:115] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0806 01:05:14.559410    6248 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 67.292µs
	I0806 01:05:14.559417    6248 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0806 01:05:14.559422    6248 cache.go:107] acquiring lock: {Name:mkd45052e96cea3a7a28fb94104e84cc7c60dad8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:14.559397    6248 cache.go:107] acquiring lock: {Name:mkbb7ffc8e63b3bf392a60ee12eab6e1a575783e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:14.559497    6248 cache.go:107] acquiring lock: {Name:mka2964e4b801eeff33ccb427f1615e394380ab6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:14.559502    6248 cache.go:107] acquiring lock: {Name:mk5e5ffd53f91bf8eca318cb95932b7e942e574e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:14.559509    6248 cache.go:107] acquiring lock: {Name:mkd522f8e705882af5ce37230829bcca5fc85f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:14.559547    6248 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 01:05:14.559561    6248 start.go:360] acquireMachinesLock for no-preload-244000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:14.559554    6248 cache.go:107] acquiring lock: {Name:mk7267ed21d6d0f150e3b0426a4c975cf8dc90b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:14.559613    6248 start.go:364] duration metric: took 46.833µs to acquireMachinesLock for "no-preload-244000"
	I0806 01:05:14.559424    6248 cache.go:107] acquiring lock: {Name:mkcd483793e7a9182b4764db195eeef1e1382d53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:14.559714    6248 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0806 01:05:14.559735    6248 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 01:05:14.559751    6248 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 01:05:14.559783    6248 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 01:05:14.559816    6248 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 01:05:14.559626    6248 start.go:93] Provisioning new machine with config: &{Name:no-preload-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:05:14.559826    6248 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:05:14.560007    6248 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0806 01:05:14.567162    6248 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 01:05:14.570324    6248 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0806 01:05:14.570908    6248 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0806 01:05:14.571043    6248 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0806 01:05:14.571100    6248 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0806 01:05:14.572513    6248 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0806 01:05:14.572588    6248 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0806 01:05:14.572608    6248 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0806 01:05:14.582865    6248 start.go:159] libmachine.API.Create for "no-preload-244000" (driver="qemu2")
	I0806 01:05:14.582885    6248 client.go:168] LocalClient.Create starting
	I0806 01:05:14.582998    6248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:05:14.583030    6248 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:14.583041    6248 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:14.583086    6248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:05:14.583109    6248 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:14.583116    6248 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:14.583593    6248 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:05:14.745942    6248 main.go:141] libmachine: Creating SSH key...
	I0806 01:05:14.927914    6248 main.go:141] libmachine: Creating Disk image...
	I0806 01:05:14.927933    6248 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:05:14.928140    6248 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2
	I0806 01:05:14.937941    6248 main.go:141] libmachine: STDOUT: 
	I0806 01:05:14.937960    6248 main.go:141] libmachine: STDERR: 
	I0806 01:05:14.938011    6248 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2 +20000M
	I0806 01:05:14.939144    6248 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0806 01:05:14.946554    6248 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0806 01:05:14.946774    6248 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:05:14.946785    6248 main.go:141] libmachine: STDERR: 
	I0806 01:05:14.946799    6248 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2
	I0806 01:05:14.946803    6248 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:05:14.946817    6248 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:14.946840    6248 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:ab:57:4d:2b:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2
	I0806 01:05:14.948808    6248 main.go:141] libmachine: STDOUT: 
	I0806 01:05:14.948833    6248 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:14.948851    6248 client.go:171] duration metric: took 365.964709ms to LocalClient.Create
	I0806 01:05:14.981192    6248 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0806 01:05:14.997611    6248 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0806 01:05:15.001545    6248 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0806 01:05:15.034420    6248 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0806 01:05:15.054450    6248 cache.go:162] opening:  /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0806 01:05:15.239555    6248 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0806 01:05:15.239568    6248 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 680.200917ms
	I0806 01:05:15.239575    6248 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0806 01:05:16.949050    6248 start.go:128] duration metric: took 2.389224s to createHost
	I0806 01:05:16.949082    6248 start.go:83] releasing machines lock for "no-preload-244000", held for 2.38947725s
	W0806 01:05:16.949135    6248 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:16.960279    6248 out.go:177] * Deleting "no-preload-244000" in qemu2 ...
	W0806 01:05:16.979573    6248 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:16.979595    6248 start.go:729] Will try again in 5 seconds ...
	I0806 01:05:17.591189    6248 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0806 01:05:17.591247    6248 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 3.031803667s
	I0806 01:05:17.591264    6248 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0806 01:05:17.791729    6248 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0806 01:05:17.791765    6248 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.232360792s
	I0806 01:05:17.791785    6248 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0806 01:05:18.604412    6248 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0806 01:05:18.604453    6248 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 4.045028s
	I0806 01:05:18.604475    6248 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0806 01:05:18.991065    6248 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0806 01:05:18.991123    6248 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 4.431783458s
	I0806 01:05:18.991157    6248 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0806 01:05:19.312204    6248 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0806 01:05:19.312288    6248 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 4.75277325s
	I0806 01:05:19.312317    6248 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0806 01:05:21.981334    6248 start.go:360] acquireMachinesLock for no-preload-244000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:21.981879    6248 start.go:364] duration metric: took 451.25µs to acquireMachinesLock for "no-preload-244000"
	I0806 01:05:21.982014    6248 start.go:93] Provisioning new machine with config: &{Name:no-preload-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:05:21.982202    6248 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:05:21.993818    6248 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 01:05:22.043779    6248 start.go:159] libmachine.API.Create for "no-preload-244000" (driver="qemu2")
	I0806 01:05:22.043824    6248 client.go:168] LocalClient.Create starting
	I0806 01:05:22.043949    6248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:05:22.044024    6248 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:22.044047    6248 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:22.044116    6248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:05:22.044161    6248 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:22.044177    6248 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:22.044690    6248 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:05:22.207609    6248 main.go:141] libmachine: Creating SSH key...
	I0806 01:05:22.299904    6248 main.go:141] libmachine: Creating Disk image...
	I0806 01:05:22.299915    6248 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:05:22.300112    6248 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2
	I0806 01:05:22.309845    6248 main.go:141] libmachine: STDOUT: 
	I0806 01:05:22.309863    6248 main.go:141] libmachine: STDERR: 
	I0806 01:05:22.309919    6248 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2 +20000M
	I0806 01:05:22.318035    6248 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:05:22.318097    6248 main.go:141] libmachine: STDERR: 
	I0806 01:05:22.318108    6248 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2
	I0806 01:05:22.318115    6248 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:05:22.318128    6248 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:22.318164    6248 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:86:77:de:f2:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2
	I0806 01:05:22.319914    6248 main.go:141] libmachine: STDOUT: 
	I0806 01:05:22.319929    6248 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:22.319943    6248 client.go:171] duration metric: took 276.117083ms to LocalClient.Create
	I0806 01:05:22.748635    6248 cache.go:157] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0806 01:05:22.748726    6248 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.189373375s
	I0806 01:05:22.748741    6248 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0806 01:05:22.748776    6248 cache.go:87] Successfully saved all images to host disk.
	I0806 01:05:24.322277    6248 start.go:128] duration metric: took 2.340027833s to createHost
	I0806 01:05:24.322366    6248 start.go:83] releasing machines lock for "no-preload-244000", held for 2.340470583s
	W0806 01:05:24.322724    6248 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:24.334404    6248 out.go:177] 
	W0806 01:05:24.339399    6248 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:05:24.339425    6248 out.go:239] * 
	W0806 01:05:24.342126    6248 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:05:24.350373    6248 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-244000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000: exit status 7 (63.283375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-244000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.98s)
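
Two distinct outcomes are visible in this failure: VM creation fails twice on the same socket_vmnet refusal, while the no-preload image pipeline succeeds (the cache.go lines report every v1.31.0-rc.0 image saved to a tar file). One way to separate socket_vmnet breakage from qemu2 breakage is to retry with the driver's user-mode network, which bypasses socket_vmnet entirely (a sketch; --network=builtin trades away host-reachable node IPs):

	out/minikube-darwin-arm64 start -p no-preload-244000 --memory=2200 \
	  --driver=qemu2 --network=builtin \
	  --kubernetes-version=v1.31.0-rc.0 --preload=false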

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-244000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-244000 create -f testdata/busybox.yaml: exit status 1 (29.805916ms)

** stderr ** 
	error: context "no-preload-244000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-244000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000: exit status 7 (28.989167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-244000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000: exit status 7 (29.089125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-244000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
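
Once FirstStart exits with status 80, no kubeconfig context for no-preload-244000 is ever written, so each later step in this serial group (DeployApp, EnableAddonWhileActive, and the post-stop checks below) fails with the identical message: context "no-preload-244000" does not exist. These are cascading symptoms of the one root cause, not independent regressions. A sketch of the same context check, assuming k8s.io/client-go is available (kubectl config get-contexts is the shell equivalent):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load kubeconfig via the default rules (KUBECONFIG or ~/.kube/config).
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		const name = "no-preload-244000" // profile name from the tests above
		if _, ok := cfg.Contexts[name]; !ok {
			// Same condition kubectl reports as: context "..." does not exist.
			fmt.Printf("context %q does not exist\n", name)
		}
	}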
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-244000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-244000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-244000 describe deploy/metrics-server -n kube-system: exit status 1 (27.621ms)
** stderr ** 
	error: context "no-preload-244000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-244000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000: exit status 7 (29.805459ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-244000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
TestStartStop/group/no-preload/serial/SecondStart (5.26s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-244000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-244000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.192530959s)
-- stdout --
	* [no-preload-244000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-244000" primary control-plane node in "no-preload-244000" cluster
	* Restarting existing qemu2 VM for "no-preload-244000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-244000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0806 01:05:27.779735    6334 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:05:27.779865    6334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:27.779868    6334 out.go:304] Setting ErrFile to fd 2...
	I0806 01:05:27.779878    6334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:27.779993    6334 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:05:27.780892    6334 out.go:298] Setting JSON to false
	I0806 01:05:27.797078    6334 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3895,"bootTime":1722927632,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:05:27.797144    6334 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:05:27.802742    6334 out.go:177] * [no-preload-244000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:05:27.813748    6334 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:05:27.813787    6334 notify.go:220] Checking for updates...
	I0806 01:05:27.819725    6334 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:05:27.822683    6334 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:05:27.825708    6334 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:05:27.828747    6334 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:05:27.831732    6334 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:05:27.835005    6334 config.go:182] Loaded profile config "no-preload-244000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0806 01:05:27.835271    6334 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:05:27.839592    6334 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 01:05:27.846712    6334 start.go:297] selected driver: qemu2
	I0806 01:05:27.846718    6334 start.go:901] validating driver "qemu2" against &{Name:no-preload-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:05:27.846781    6334 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:05:27.849077    6334 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:05:27.849103    6334 cni.go:84] Creating CNI manager for ""
	I0806 01:05:27.849111    6334 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:05:27.849130    6334 start.go:340] cluster config:
	{Name:no-preload-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:05:27.852579    6334 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:27.859705    6334 out.go:177] * Starting "no-preload-244000" primary control-plane node in "no-preload-244000" cluster
	I0806 01:05:27.863587    6334 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 01:05:27.863659    6334 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/no-preload-244000/config.json ...
	I0806 01:05:27.863711    6334 cache.go:107] acquiring lock: {Name:mk7267ed21d6d0f150e3b0426a4c975cf8dc90b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:27.863727    6334 cache.go:107] acquiring lock: {Name:mkcd483793e7a9182b4764db195eeef1e1382d53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:27.863777    6334 cache.go:115] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0806 01:05:27.863713    6334 cache.go:107] acquiring lock: {Name:mk092792f1d077f24b78422b7c0bdf32a6e62d44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:27.863793    6334 cache.go:107] acquiring lock: {Name:mkbb7ffc8e63b3bf392a60ee12eab6e1a575783e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:27.863803    6334 cache.go:107] acquiring lock: {Name:mkd45052e96cea3a7a28fb94104e84cc7c60dad8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:27.863795    6334 cache.go:107] acquiring lock: {Name:mkd522f8e705882af5ce37230829bcca5fc85f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:27.863847    6334 cache.go:115] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0806 01:05:27.863780    6334 cache.go:115] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0806 01:05:27.863852    6334 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 49.292µs
	I0806 01:05:27.863858    6334 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0806 01:05:27.863859    6334 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 131.625µs
	I0806 01:05:27.863867    6334 cache.go:115] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0806 01:05:27.863865    6334 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0806 01:05:27.863782    6334 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 75.125µs
	I0806 01:05:27.863875    6334 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0806 01:05:27.863826    6334 cache.go:107] acquiring lock: {Name:mk5e5ffd53f91bf8eca318cb95932b7e942e574e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:27.863906    6334 cache.go:115] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0806 01:05:27.863831    6334 cache.go:107] acquiring lock: {Name:mka2964e4b801eeff33ccb427f1615e394380ab6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:27.863871    6334 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 159.834µs
	I0806 01:05:27.864040    6334 cache.go:115] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0806 01:05:27.864044    6334 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0806 01:05:27.864046    6334 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 215.416µs
	I0806 01:05:27.864050    6334 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0806 01:05:27.863925    6334 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 131.833µs
	I0806 01:05:27.864054    6334 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0806 01:05:27.863955    6334 cache.go:115] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0806 01:05:27.864057    6334 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 263.25µs
	I0806 01:05:27.864060    6334 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0806 01:05:27.864079    6334 cache.go:115] /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0806 01:05:27.864087    6334 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 271.375µs
	I0806 01:05:27.864104    6334 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0806 01:05:27.864110    6334 cache.go:87] Successfully saved all images to host disk.
	I0806 01:05:27.864136    6334 start.go:360] acquireMachinesLock for no-preload-244000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:27.864163    6334 start.go:364] duration metric: took 21.291µs to acquireMachinesLock for "no-preload-244000"
	I0806 01:05:27.864170    6334 start.go:96] Skipping create...Using existing machine configuration
	I0806 01:05:27.864175    6334 fix.go:54] fixHost starting: 
	I0806 01:05:27.864297    6334 fix.go:112] recreateIfNeeded on no-preload-244000: state=Stopped err=<nil>
	W0806 01:05:27.864307    6334 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 01:05:27.872492    6334 out.go:177] * Restarting existing qemu2 VM for "no-preload-244000" ...
	I0806 01:05:27.876719    6334 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:27.876758    6334 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:86:77:de:f2:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2
	I0806 01:05:27.878627    6334 main.go:141] libmachine: STDOUT: 
	I0806 01:05:27.878644    6334 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:27.878669    6334 fix.go:56] duration metric: took 14.493333ms for fixHost
	I0806 01:05:27.878674    6334 start.go:83] releasing machines lock for "no-preload-244000", held for 14.507917ms
	W0806 01:05:27.878680    6334 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:05:27.878714    6334 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:27.878718    6334 start.go:729] Will try again in 5 seconds ...
	I0806 01:05:32.880908    6334 start.go:360] acquireMachinesLock for no-preload-244000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:32.881351    6334 start.go:364] duration metric: took 367.041µs to acquireMachinesLock for "no-preload-244000"
	I0806 01:05:32.881487    6334 start.go:96] Skipping create...Using existing machine configuration
	I0806 01:05:32.881507    6334 fix.go:54] fixHost starting: 
	I0806 01:05:32.882227    6334 fix.go:112] recreateIfNeeded on no-preload-244000: state=Stopped err=<nil>
	W0806 01:05:32.882254    6334 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 01:05:32.897832    6334 out.go:177] * Restarting existing qemu2 VM for "no-preload-244000" ...
	I0806 01:05:32.900769    6334 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:32.900999    6334 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:86:77:de:f2:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/no-preload-244000/disk.qcow2
	I0806 01:05:32.910187    6334 main.go:141] libmachine: STDOUT: 
	I0806 01:05:32.910253    6334 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:32.910328    6334 fix.go:56] duration metric: took 28.82525ms for fixHost
	I0806 01:05:32.910348    6334 start.go:83] releasing machines lock for "no-preload-244000", held for 28.975417ms
	W0806 01:05:32.910494    6334 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-244000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-244000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:32.918641    6334 out.go:177] 
	W0806 01:05:32.921784    6334 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:05:32.921820    6334 out.go:239] * 
	* 
	W0806 01:05:32.924641    6334 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:05:32.932507    6334 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-244000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000: exit status 7 (68.948625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-244000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
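
The cache.go:115 lines above show a side effect of --preload=false: instead of one preloaded tarball, minikube verifies each control-plane image tarball individually under .minikube/cache/images/arm64, and every lookup here is a cache hit taking microseconds. Image caching succeeds; only the VM start fails. A small sketch of an equivalent existence check (the cache root and file names are copied from the log; the program itself is illustrative):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Cache layout as shown in the cache.go:115 lines above.
		root := filepath.Join(os.Getenv("HOME"), ".minikube/cache/images/arm64")
		images := []string{
			"registry.k8s.io/kube-apiserver_v1.31.0-rc.0",
			"registry.k8s.io/etcd_3.5.15-0",
			"registry.k8s.io/pause_3.10",
		}
		for _, img := range images {
			if _, err := os.Stat(filepath.Join(root, img)); err != nil {
				fmt.Printf("missing: %s (%v)\n", img, err)
				continue
			}
			fmt.Printf("exists:  %s\n", img)
		}
	}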
TestStartStop/group/embed-certs/serial/FirstStart (9.86s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-601000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-601000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.788075209s)
-- stdout --
	* [embed-certs-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-601000" primary control-plane node in "embed-certs-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0806 01:05:29.016931    6344 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:05:29.017074    6344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:29.017077    6344 out.go:304] Setting ErrFile to fd 2...
	I0806 01:05:29.017080    6344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:29.017212    6344 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:05:29.018485    6344 out.go:298] Setting JSON to false
	I0806 01:05:29.034837    6344 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3897,"bootTime":1722927632,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:05:29.034908    6344 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:05:29.039515    6344 out.go:177] * [embed-certs-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:05:29.046478    6344 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:05:29.046521    6344 notify.go:220] Checking for updates...
	I0806 01:05:29.052530    6344 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:05:29.055434    6344 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:05:29.058520    6344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:05:29.061562    6344 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:05:29.064543    6344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:05:29.067825    6344 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:05:29.067904    6344 config.go:182] Loaded profile config "no-preload-244000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0806 01:05:29.067975    6344 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:05:29.071524    6344 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:05:29.078457    6344 start.go:297] selected driver: qemu2
	I0806 01:05:29.078464    6344 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:05:29.078471    6344 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:05:29.080815    6344 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:05:29.084555    6344 out.go:177] * Automatically selected the socket_vmnet network
	I0806 01:05:29.087516    6344 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:05:29.087564    6344 cni.go:84] Creating CNI manager for ""
	I0806 01:05:29.087574    6344 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:05:29.087579    6344 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 01:05:29.087621    6344 start.go:340] cluster config:
	{Name:embed-certs-601000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:05:29.091541    6344 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:29.099504    6344 out.go:177] * Starting "embed-certs-601000" primary control-plane node in "embed-certs-601000" cluster
	I0806 01:05:29.103463    6344 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:05:29.103479    6344 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 01:05:29.103489    6344 cache.go:56] Caching tarball of preloaded images
	I0806 01:05:29.103560    6344 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:05:29.103566    6344 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:05:29.103636    6344 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/embed-certs-601000/config.json ...
	I0806 01:05:29.103653    6344 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/embed-certs-601000/config.json: {Name:mk8fab485cbb9a15d095b6ee6970ea2e0b0c8eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:05:29.104039    6344 start.go:360] acquireMachinesLock for embed-certs-601000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:29.104072    6344 start.go:364] duration metric: took 27.125µs to acquireMachinesLock for "embed-certs-601000"
	I0806 01:05:29.104082    6344 start.go:93] Provisioning new machine with config: &{Name:embed-certs-601000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:05:29.104117    6344 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:05:29.108526    6344 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 01:05:29.126150    6344 start.go:159] libmachine.API.Create for "embed-certs-601000" (driver="qemu2")
	I0806 01:05:29.126174    6344 client.go:168] LocalClient.Create starting
	I0806 01:05:29.126235    6344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:05:29.126266    6344 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:29.126279    6344 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:29.126323    6344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:05:29.126352    6344 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:29.126362    6344 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:29.126791    6344 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:05:29.285161    6344 main.go:141] libmachine: Creating SSH key...
	I0806 01:05:29.319833    6344 main.go:141] libmachine: Creating Disk image...
	I0806 01:05:29.319838    6344 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:05:29.320061    6344 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2
	I0806 01:05:29.329013    6344 main.go:141] libmachine: STDOUT: 
	I0806 01:05:29.329031    6344 main.go:141] libmachine: STDERR: 
	I0806 01:05:29.329067    6344 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2 +20000M
	I0806 01:05:29.336741    6344 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:05:29.336760    6344 main.go:141] libmachine: STDERR: 
	I0806 01:05:29.336778    6344 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2
	I0806 01:05:29.336783    6344 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:05:29.336795    6344 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:29.336826    6344 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:86:b8:48:8b:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2
	I0806 01:05:29.338397    6344 main.go:141] libmachine: STDOUT: 
	I0806 01:05:29.338412    6344 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:29.338430    6344 client.go:171] duration metric: took 212.252792ms to LocalClient.Create
	I0806 01:05:31.340581    6344 start.go:128] duration metric: took 2.23645725s to createHost
	I0806 01:05:31.340653    6344 start.go:83] releasing machines lock for "embed-certs-601000", held for 2.236586083s
	W0806 01:05:31.340726    6344 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:31.350831    6344 out.go:177] * Deleting "embed-certs-601000" in qemu2 ...
	W0806 01:05:31.382124    6344 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:31.382145    6344 start.go:729] Will try again in 5 seconds ...
	I0806 01:05:36.384447    6344 start.go:360] acquireMachinesLock for embed-certs-601000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:36.384897    6344 start.go:364] duration metric: took 324.625µs to acquireMachinesLock for "embed-certs-601000"
	I0806 01:05:36.385031    6344 start.go:93] Provisioning new machine with config: &{Name:embed-certs-601000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:05:36.385315    6344 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:05:36.390796    6344 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 01:05:36.440906    6344 start.go:159] libmachine.API.Create for "embed-certs-601000" (driver="qemu2")
	I0806 01:05:36.440955    6344 client.go:168] LocalClient.Create starting
	I0806 01:05:36.441067    6344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:05:36.441129    6344 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:36.441146    6344 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:36.441206    6344 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:05:36.441248    6344 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:36.441259    6344 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:36.441780    6344 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:05:36.612069    6344 main.go:141] libmachine: Creating SSH key...
	I0806 01:05:36.705938    6344 main.go:141] libmachine: Creating Disk image...
	I0806 01:05:36.705947    6344 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:05:36.706135    6344 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2
	I0806 01:05:36.715035    6344 main.go:141] libmachine: STDOUT: 
	I0806 01:05:36.715055    6344 main.go:141] libmachine: STDERR: 
	I0806 01:05:36.715115    6344 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2 +20000M
	I0806 01:05:36.723056    6344 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:05:36.723071    6344 main.go:141] libmachine: STDERR: 
	I0806 01:05:36.723079    6344 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2
	I0806 01:05:36.723084    6344 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:05:36.723095    6344 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:36.723136    6344 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:9d:f5:b5:a1:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2
	I0806 01:05:36.724642    6344 main.go:141] libmachine: STDOUT: 
	I0806 01:05:36.724659    6344 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:36.724670    6344 client.go:171] duration metric: took 283.711166ms to LocalClient.Create
	I0806 01:05:38.726821    6344 start.go:128] duration metric: took 2.341487291s to createHost
	I0806 01:05:38.726896    6344 start.go:83] releasing machines lock for "embed-certs-601000", held for 2.34198575s
	W0806 01:05:38.727295    6344 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:38.742731    6344 out.go:177] 
	W0806 01:05:38.750920    6344 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:05:38.750952    6344 out.go:239] * 
	* 
	W0806 01:05:38.753306    6344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:05:38.762856    6344 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-601000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (65.511875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.86s)
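
The embed-certs log shows how far provisioning gets before networking fails: the boot2docker ISO is copied from the local cache, an SSH key is created, and the qcow2 disk is built with qemu-img convert and qemu-img resize, both succeeding with empty STDERR; only the final qemu-system-aarch64 launch through socket_vmnet_client fails. A runnable sketch of just the disk-image step, mirroring the two qemu-img invocations above (file names are placeholders, not the test's real per-profile paths):

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and fails loudly with its combined output.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		const disk = "disk.qcow2" // placeholder path
		// Convert the raw image to qcow2, then grow it by 20000 MB, exactly as
		// in the libmachine log lines above.
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", disk)
		run("qemu-img", "resize", disk, "+20000M")
	}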
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-244000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000: exit status 7 (32.25475ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-244000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-244000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-244000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-244000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.702958ms)

** stderr ** 
	error: context "no-preload-244000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-244000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000: exit status 7 (29.106625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-244000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-244000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000: exit status 7 (28.45925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-244000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
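
Note: the block above is a go-cmp style diff, (-want +got): "-" lines are images the test expected "image list" to report for v1.31.0-rc.0; "+" lines would be unexpected extras. Since the VM never booted, the got side is empty and every expected image is flagged as missing. On a healthy cluster the command from the log prints exactly the "-want" set:

	# The query the test ran (taken verbatim from the log above); against a
	# running cluster it returns the bundled control-plane images as JSON.
	out/minikube-darwin-arm64 -p no-preload-244000 image list --format=json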

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-244000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-244000 --alsologtostderr -v=1: exit status 83 (39.219541ms)

-- stdout --
	* The control-plane node no-preload-244000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-244000"

-- /stdout --
** stderr ** 
	I0806 01:05:33.200759    6366 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:05:33.200956    6366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:33.200959    6366 out.go:304] Setting ErrFile to fd 2...
	I0806 01:05:33.200962    6366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:33.201090    6366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:05:33.201307    6366 out.go:298] Setting JSON to false
	I0806 01:05:33.201313    6366 mustload.go:65] Loading cluster: no-preload-244000
	I0806 01:05:33.201499    6366 config.go:182] Loaded profile config "no-preload-244000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0806 01:05:33.204633    6366 out.go:177] * The control-plane node no-preload-244000 host is not running: state=Stopped
	I0806 01:05:33.207669    6366 out.go:177]   To start a cluster, run: "minikube start -p no-preload-244000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-244000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000: exit status 7 (28.260042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-244000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000: exit status 7 (28.231833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-244000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
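
Note: pause never attempted the operation; it detected the stopped host and exited with an advisory (exit status 83), while the status probe encodes the same stopped state as exit status 7. A guard of the following shape (a sketch, not part of the test suite) would make the pause conditional on a running host:

	# Sketch: only pause when the host reports Running. "status" exits non-zero
	# for a stopped host and prints the host state, so the grep gates the pause.
	if out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 | grep -q Running; then
	  out/minikube-darwin-arm64 pause -p no-preload-244000
	fi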

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-689000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-689000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.828752792s)

-- stdout --
	* [default-k8s-diff-port-689000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-689000" primary control-plane node in "default-k8s-diff-port-689000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-689000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:05:33.625322    6390 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:05:33.625450    6390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:33.625453    6390 out.go:304] Setting ErrFile to fd 2...
	I0806 01:05:33.625455    6390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:33.625574    6390 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:05:33.626621    6390 out.go:298] Setting JSON to false
	I0806 01:05:33.642709    6390 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3901,"bootTime":1722927632,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:05:33.642789    6390 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:05:33.646713    6390 out.go:177] * [default-k8s-diff-port-689000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:05:33.653730    6390 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:05:33.653836    6390 notify.go:220] Checking for updates...
	I0806 01:05:33.660689    6390 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:05:33.663697    6390 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:05:33.666640    6390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:05:33.669710    6390 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:05:33.672591    6390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:05:33.676005    6390 config.go:182] Loaded profile config "embed-certs-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:05:33.676070    6390 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:05:33.676116    6390 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:05:33.680628    6390 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:05:33.687691    6390 start.go:297] selected driver: qemu2
	I0806 01:05:33.687697    6390 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:05:33.687706    6390 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:05:33.689976    6390 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:05:33.692656    6390 out.go:177] * Automatically selected the socket_vmnet network
	I0806 01:05:33.695715    6390 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:05:33.695752    6390 cni.go:84] Creating CNI manager for ""
	I0806 01:05:33.695759    6390 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:05:33.695764    6390 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 01:05:33.695792    6390 start.go:340] cluster config:
	{Name:default-k8s-diff-port-689000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-689000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:05:33.699385    6390 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:33.706727    6390 out.go:177] * Starting "default-k8s-diff-port-689000" primary control-plane node in "default-k8s-diff-port-689000" cluster
	I0806 01:05:33.710726    6390 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:05:33.710742    6390 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 01:05:33.710753    6390 cache.go:56] Caching tarball of preloaded images
	I0806 01:05:33.710826    6390 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:05:33.710832    6390 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:05:33.710898    6390 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/default-k8s-diff-port-689000/config.json ...
	I0806 01:05:33.710909    6390 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/default-k8s-diff-port-689000/config.json: {Name:mked84135abf10aee9bb69893a954d0a614be364 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:05:33.711130    6390 start.go:360] acquireMachinesLock for default-k8s-diff-port-689000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:33.711165    6390 start.go:364] duration metric: took 27.916µs to acquireMachinesLock for "default-k8s-diff-port-689000"
	I0806 01:05:33.711176    6390 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-689000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-689000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:05:33.711204    6390 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:05:33.719609    6390 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 01:05:33.737166    6390 start.go:159] libmachine.API.Create for "default-k8s-diff-port-689000" (driver="qemu2")
	I0806 01:05:33.737199    6390 client.go:168] LocalClient.Create starting
	I0806 01:05:33.737262    6390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:05:33.737295    6390 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:33.737308    6390 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:33.737343    6390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:05:33.737364    6390 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:33.737372    6390 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:33.737726    6390 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:05:33.896201    6390 main.go:141] libmachine: Creating SSH key...
	I0806 01:05:33.931602    6390 main.go:141] libmachine: Creating Disk image...
	I0806 01:05:33.931607    6390 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:05:33.931787    6390 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2
	I0806 01:05:33.941086    6390 main.go:141] libmachine: STDOUT: 
	I0806 01:05:33.941102    6390 main.go:141] libmachine: STDERR: 
	I0806 01:05:33.941158    6390 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2 +20000M
	I0806 01:05:33.948886    6390 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:05:33.948905    6390 main.go:141] libmachine: STDERR: 
	I0806 01:05:33.948923    6390 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2
	I0806 01:05:33.948932    6390 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:05:33.948940    6390 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:33.948964    6390 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:2b:5d:0e:4e:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2
	I0806 01:05:33.950671    6390 main.go:141] libmachine: STDOUT: 
	I0806 01:05:33.950685    6390 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:33.950702    6390 client.go:171] duration metric: took 213.501459ms to LocalClient.Create
	I0806 01:05:35.952859    6390 start.go:128] duration metric: took 2.241650542s to createHost
	I0806 01:05:35.952936    6390 start.go:83] releasing machines lock for "default-k8s-diff-port-689000", held for 2.241775417s
	W0806 01:05:35.953062    6390 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:35.966971    6390 out.go:177] * Deleting "default-k8s-diff-port-689000" in qemu2 ...
	W0806 01:05:35.994533    6390 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:35.994558    6390 start.go:729] Will try again in 5 seconds ...
	I0806 01:05:40.995604    6390 start.go:360] acquireMachinesLock for default-k8s-diff-port-689000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:40.995719    6390 start.go:364] duration metric: took 85.875µs to acquireMachinesLock for "default-k8s-diff-port-689000"
	I0806 01:05:40.995732    6390 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-689000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-689000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:05:40.995773    6390 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:05:40.999501    6390 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 01:05:41.015658    6390 start.go:159] libmachine.API.Create for "default-k8s-diff-port-689000" (driver="qemu2")
	I0806 01:05:41.015679    6390 client.go:168] LocalClient.Create starting
	I0806 01:05:41.015746    6390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:05:41.015779    6390 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:41.015790    6390 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:41.015847    6390 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:05:41.015871    6390 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:41.015880    6390 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:41.016145    6390 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:05:41.263503    6390 main.go:141] libmachine: Creating SSH key...
	I0806 01:05:41.360712    6390 main.go:141] libmachine: Creating Disk image...
	I0806 01:05:41.360720    6390 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:05:41.360900    6390 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2
	I0806 01:05:41.370253    6390 main.go:141] libmachine: STDOUT: 
	I0806 01:05:41.370272    6390 main.go:141] libmachine: STDERR: 
	I0806 01:05:41.370323    6390 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2 +20000M
	I0806 01:05:41.378444    6390 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:05:41.378457    6390 main.go:141] libmachine: STDERR: 
	I0806 01:05:41.378466    6390 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2
	I0806 01:05:41.378476    6390 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:05:41.378488    6390 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:41.378510    6390 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:0d:df:3c:fc:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2
	I0806 01:05:41.380064    6390 main.go:141] libmachine: STDOUT: 
	I0806 01:05:41.380080    6390 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:41.380091    6390 client.go:171] duration metric: took 364.410667ms to LocalClient.Create
	I0806 01:05:43.382246    6390 start.go:128] duration metric: took 2.386469792s to createHost
	I0806 01:05:43.382341    6390 start.go:83] releasing machines lock for "default-k8s-diff-port-689000", held for 2.386598334s
	W0806 01:05:43.382803    6390 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-689000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-689000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:43.400096    6390 out.go:177] 
	W0806 01:05:43.406296    6390 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:05:43.406334    6390 out.go:239] * 
	* 
	W0806 01:05:43.409018    6390 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:05:43.416108    6390 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-689000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000: exit status 7 (57.807ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-689000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.89s)
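
Note: this log shows the driver's complete create-retry cycle: createHost fails at the socket_vmnet_client handoff, the half-created VM is deleted, minikube waits 5 seconds and retries once, then exits 80 (GUEST_PROVISION). QEMU is never launched directly; socket_vmnet_client connects to the unix socket and hands the connection to qemu-system-aarch64 as fd 3 (-netdev socket,id=net0,fd=3). The handoff can be probed in isolation, as a sketch using the client and socket paths from the log:

	# Sketch: socket_vmnet_client connects to the socket, then runs the given
	# command with the connection on fd 3. With the daemon down this fails with
	# the same "Connection refused" as the VM starts above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  && echo reachable || echo "connection refused"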

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-601000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-601000 create -f testdata/busybox.yaml: exit status 1 (30.328084ms)

** stderr ** 
	error: context "embed-certs-601000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-601000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (28.431958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (28.513292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-601000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-601000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-601000 describe deploy/metrics-server -n kube-system: exit status 1 (26.603084ms)

** stderr ** 
	error: context "embed-certs-601000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-601000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (28.868417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (7.57s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-601000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-601000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (7.498775542s)

-- stdout --
	* [embed-certs-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-601000" primary control-plane node in "embed-certs-601000" cluster
	* Restarting existing qemu2 VM for "embed-certs-601000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-601000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:05:41.008955    6431 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:05:41.009077    6431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:41.009080    6431 out.go:304] Setting ErrFile to fd 2...
	I0806 01:05:41.009083    6431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:41.009197    6431 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:05:41.010147    6431 out.go:298] Setting JSON to false
	I0806 01:05:41.028327    6431 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3909,"bootTime":1722927632,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:05:41.028415    6431 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:05:41.034564    6431 out.go:177] * [embed-certs-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:05:41.042631    6431 notify.go:220] Checking for updates...
	I0806 01:05:41.047324    6431 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:05:41.055477    6431 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:05:41.065523    6431 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:05:41.072463    6431 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:05:41.079365    6431 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:05:41.087473    6431 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:05:41.093745    6431 config.go:182] Loaded profile config "embed-certs-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:05:41.094022    6431 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:05:41.098543    6431 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 01:05:41.109545    6431 start.go:297] selected driver: qemu2
	I0806 01:05:41.109552    6431 start.go:901] validating driver "qemu2" against &{Name:embed-certs-601000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:05:41.109638    6431 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:05:41.112571    6431 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:05:41.112602    6431 cni.go:84] Creating CNI manager for ""
	I0806 01:05:41.112611    6431 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:05:41.112647    6431 start.go:340] cluster config:
	{Name:embed-certs-601000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:05:41.117220    6431 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:41.122636    6431 out.go:177] * Starting "embed-certs-601000" primary control-plane node in "embed-certs-601000" cluster
	I0806 01:05:41.130534    6431 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:05:41.130554    6431 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 01:05:41.130566    6431 cache.go:56] Caching tarball of preloaded images
	I0806 01:05:41.130652    6431 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:05:41.130658    6431 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:05:41.130741    6431 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/embed-certs-601000/config.json ...
	I0806 01:05:41.131313    6431 start.go:360] acquireMachinesLock for embed-certs-601000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:43.382498    6431 start.go:364] duration metric: took 2.251172s to acquireMachinesLock for "embed-certs-601000"
	I0806 01:05:43.382651    6431 start.go:96] Skipping create...Using existing machine configuration
	I0806 01:05:43.382818    6431 fix.go:54] fixHost starting: 
	I0806 01:05:43.383598    6431 fix.go:112] recreateIfNeeded on embed-certs-601000: state=Stopped err=<nil>
	W0806 01:05:43.383656    6431 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 01:05:43.400039    6431 out.go:177] * Restarting existing qemu2 VM for "embed-certs-601000" ...
	I0806 01:05:43.409140    6431 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:43.409321    6431 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:9d:f5:b5:a1:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2
	I0806 01:05:43.418854    6431 main.go:141] libmachine: STDOUT: 
	I0806 01:05:43.418969    6431 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:43.419102    6431 fix.go:56] duration metric: took 36.38875ms for fixHost
	I0806 01:05:43.419127    6431 start.go:83] releasing machines lock for "embed-certs-601000", held for 36.589ms
	W0806 01:05:43.419159    6431 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:05:43.419318    6431 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:43.419334    6431 start.go:729] Will try again in 5 seconds ...
	I0806 01:05:48.420118    6431 start.go:360] acquireMachinesLock for embed-certs-601000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:48.420480    6431 start.go:364] duration metric: took 287.833µs to acquireMachinesLock for "embed-certs-601000"
	I0806 01:05:48.420610    6431 start.go:96] Skipping create...Using existing machine configuration
	I0806 01:05:48.420631    6431 fix.go:54] fixHost starting: 
	I0806 01:05:48.421473    6431 fix.go:112] recreateIfNeeded on embed-certs-601000: state=Stopped err=<nil>
	W0806 01:05:48.421502    6431 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 01:05:48.426140    6431 out.go:177] * Restarting existing qemu2 VM for "embed-certs-601000" ...
	I0806 01:05:48.434141    6431 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:48.434388    6431 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:9d:f5:b5:a1:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/embed-certs-601000/disk.qcow2
	I0806 01:05:48.443906    6431 main.go:141] libmachine: STDOUT: 
	I0806 01:05:48.443975    6431 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:48.444044    6431 fix.go:56] duration metric: took 23.411167ms for fixHost
	I0806 01:05:48.444065    6431 start.go:83] releasing machines lock for "embed-certs-601000", held for 23.565541ms
	W0806 01:05:48.444250    6431 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-601000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-601000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:48.453012    6431 out.go:177] 
	W0806 01:05:48.456922    6431 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:05:48.456947    6431 out.go:239] * 
	* 
	W0806 01:05:48.459373    6431 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:05:48.471024    6431 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-601000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (65.092166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.57s)
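
Every failed start in this report shares one root cause: socket_vmnet_client cannot reach the /var/run/socket_vmnet unix socket, so QEMU never receives the network file descriptor it expects on fd 3 (the -netdev socket,id=net0,fd=3 argument in the command line above). A minimal Go sketch of the same probe, assuming the socket path from the log, separates "socket file missing" from "daemon not listening"; it is a diagnostic aid, not minikube code:

// probe_socket_vmnet.go - diagnostic sketch, not part of minikube.
// Dials the unix socket socket_vmnet_client uses; "connection refused"
// with the socket file present usually means the socket_vmnet daemon
// is not running on the build host.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing qemu command line
	if _, err := os.Stat(sock); err != nil {
		fmt.Println("socket file missing:", err)
		os.Exit(1)
	}
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the state the report shows: ECONNREFUSED on an existing socket.
		fmt.Println("dial failed, daemon likely not running:", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}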

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-689000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-689000 create -f testdata/busybox.yaml: exit status 1 (29.592292ms)

** stderr ** 
	error: context "default-k8s-diff-port-689000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-689000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000: exit status 7 (27.686458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-689000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000: exit status 7 (27.372583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-689000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
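
This DeployApp failure is a cascade, not an independent bug: the preceding start never brought the cluster up, so the default-k8s-diff-port-689000 context was never written to the kubeconfig and every kubectl call fails with "context does not exist". A short client-go sketch (context name taken from the log; standard kubectl loading rules assumed) shows how to confirm that before reading anything into the deploy error:

// check_context.go - sketch using client-go's kubeconfig loader.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the way kubectl does (KUBECONFIG or ~/.kube/config).
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	const want = "default-k8s-diff-port-689000" // context name from the failing test
	if _, ok := cfg.Contexts[want]; !ok {
		fmt.Printf("context %q does not exist; contexts present:\n", want)
		for name := range cfg.Contexts {
			fmt.Println("  -", name)
		}
		os.Exit(1)
	}
	fmt.Println("context exists:", want)
}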

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-689000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-689000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-689000 describe deploy/metrics-server -n kube-system: exit status 1 (26.588791ms)

** stderr ** 
	error: context "default-k8s-diff-port-689000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-689000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000: exit status 7 (28.360958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-689000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-689000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
E0806 01:05:48.419417    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-689000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.188486833s)

-- stdout --
	* [default-k8s-diff-port-689000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-689000" primary control-plane node in "default-k8s-diff-port-689000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-689000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-689000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:05:47.439738    6478 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:05:47.439875    6478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:47.439879    6478 out.go:304] Setting ErrFile to fd 2...
	I0806 01:05:47.439881    6478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:47.440002    6478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:05:47.440990    6478 out.go:298] Setting JSON to false
	I0806 01:05:47.457114    6478 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3915,"bootTime":1722927632,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:05:47.457176    6478 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:05:47.461952    6478 out.go:177] * [default-k8s-diff-port-689000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:05:47.468873    6478 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:05:47.468924    6478 notify.go:220] Checking for updates...
	I0806 01:05:47.475869    6478 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:05:47.478927    6478 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:05:47.481895    6478 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:05:47.484836    6478 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:05:47.487898    6478 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:05:47.491121    6478 config.go:182] Loaded profile config "default-k8s-diff-port-689000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:05:47.491392    6478 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:05:47.495809    6478 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 01:05:47.502874    6478 start.go:297] selected driver: qemu2
	I0806 01:05:47.502881    6478 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-689000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-689000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:05:47.502939    6478 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:05:47.505250    6478 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:05:47.505274    6478 cni.go:84] Creating CNI manager for ""
	I0806 01:05:47.505281    6478 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:05:47.505310    6478 start.go:340] cluster config:
	{Name:default-k8s-diff-port-689000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-689000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:05:47.508743    6478 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:47.515893    6478 out.go:177] * Starting "default-k8s-diff-port-689000" primary control-plane node in "default-k8s-diff-port-689000" cluster
	I0806 01:05:47.519856    6478 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:05:47.519871    6478 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 01:05:47.519886    6478 cache.go:56] Caching tarball of preloaded images
	I0806 01:05:47.519941    6478 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:05:47.519947    6478 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:05:47.520008    6478 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/default-k8s-diff-port-689000/config.json ...
	I0806 01:05:47.520521    6478 start.go:360] acquireMachinesLock for default-k8s-diff-port-689000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:47.520550    6478 start.go:364] duration metric: took 23.5µs to acquireMachinesLock for "default-k8s-diff-port-689000"
	I0806 01:05:47.520559    6478 start.go:96] Skipping create...Using existing machine configuration
	I0806 01:05:47.520567    6478 fix.go:54] fixHost starting: 
	I0806 01:05:47.520682    6478 fix.go:112] recreateIfNeeded on default-k8s-diff-port-689000: state=Stopped err=<nil>
	W0806 01:05:47.520690    6478 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 01:05:47.524911    6478 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-689000" ...
	I0806 01:05:47.532817    6478 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:47.532851    6478 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:0d:df:3c:fc:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2
	I0806 01:05:47.534852    6478 main.go:141] libmachine: STDOUT: 
	I0806 01:05:47.534874    6478 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:47.534905    6478 fix.go:56] duration metric: took 14.339458ms for fixHost
	I0806 01:05:47.534910    6478 start.go:83] releasing machines lock for "default-k8s-diff-port-689000", held for 14.354833ms
	W0806 01:05:47.534924    6478 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:05:47.534971    6478 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:47.534976    6478 start.go:729] Will try again in 5 seconds ...
	I0806 01:05:52.535391    6478 start.go:360] acquireMachinesLock for default-k8s-diff-port-689000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:52.535933    6478 start.go:364] duration metric: took 426.541µs to acquireMachinesLock for "default-k8s-diff-port-689000"
	I0806 01:05:52.536053    6478 start.go:96] Skipping create...Using existing machine configuration
	I0806 01:05:52.536077    6478 fix.go:54] fixHost starting: 
	I0806 01:05:52.536902    6478 fix.go:112] recreateIfNeeded on default-k8s-diff-port-689000: state=Stopped err=<nil>
	W0806 01:05:52.536934    6478 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 01:05:52.553506    6478 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-689000" ...
	I0806 01:05:52.556317    6478 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:52.556562    6478 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:0d:df:3c:fc:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/default-k8s-diff-port-689000/disk.qcow2
	I0806 01:05:52.566243    6478 main.go:141] libmachine: STDOUT: 
	I0806 01:05:52.566319    6478 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:52.566432    6478 fix.go:56] duration metric: took 30.357459ms for fixHost
	I0806 01:05:52.566452    6478 start.go:83] releasing machines lock for "default-k8s-diff-port-689000", held for 30.495125ms
	W0806 01:05:52.566646    6478 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-689000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:52.575303    6478 out.go:177] 
	W0806 01:05:52.578326    6478 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:05:52.578351    6478 out.go:239] * 
	W0806 01:05:52.580982    6478 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:05:52.589296    6478 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-689000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000: exit status 7 (68.17575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-689000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
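
The stderr above also shows the shape of minikube's recovery path: fixHost fails, start.go logs "Will try again in 5 seconds ...", the host start is retried once, and only the second failure becomes the fatal GUEST_PROVISION exit (status 80). A compressed sketch of that retry flow, where startHost is a hypothetical stand-in that fails the way this run does:

// retry_start.go - hypothetical condensation of the retry behavior in the log.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// startHost stands in for the driver start; here it always fails the way
// this report does, so the retry path is exercised.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" pause
		err = startHost()
	}
	if err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		os.Exit(80) // the exit status the test then asserts against
	}
}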

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-601000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (32.968584ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-601000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.756875ms)

** stderr ** 
	error: context "embed-certs-601000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (28.763875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-601000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (29.059375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
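
The -want +got diff above is a set comparison: all eight expected v1.30.3 images show as missing because "image list" ran against a host that never started, so the got side is empty. The check reduces to something like this sketch (expected names copied from the diff; the real test compares with go-cmp):

// verify_images.go - sketch of the want/got comparison behind the diff above.
package main

import "fmt"

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/kube-controller-manager:v1.30.3",
		"registry.k8s.io/kube-proxy:v1.30.3",
		"registry.k8s.io/kube-scheduler:v1.30.3",
		"registry.k8s.io/pause:3.9",
	}
	// got is empty here because "image list" ran against a stopped host.
	got := map[string]bool{}
	for _, img := range want {
		if !got[img] {
			fmt.Println("- missing:", img)
		}
	}
}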

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-601000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-601000 --alsologtostderr -v=1: exit status 83 (39.404167ms)

-- stdout --
	* The control-plane node embed-certs-601000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-601000"

-- /stdout --
** stderr ** 
	I0806 01:05:48.733286    6497 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:05:48.733448    6497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:48.733456    6497 out.go:304] Setting ErrFile to fd 2...
	I0806 01:05:48.733458    6497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:48.733602    6497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:05:48.733810    6497 out.go:298] Setting JSON to false
	I0806 01:05:48.733817    6497 mustload.go:65] Loading cluster: embed-certs-601000
	I0806 01:05:48.734000    6497 config.go:182] Loaded profile config "embed-certs-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:05:48.737314    6497 out.go:177] * The control-plane node embed-certs-601000 host is not running: state=Stopped
	I0806 01:05:48.741295    6497 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-601000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-601000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (28.341708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (28.229834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
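
One recurring detail in these post-mortems: `minikube status` exits with code 7 and helpers_test notes "(may be ok)". minikube's status command appears to compose its exit code from not-running flags, so a profile whose host, kubelet, and apiserver are all down commonly reports 7 rather than a generic 1. A small sketch that captures the code the way the helper does (binary path and profile name taken from the log):

// status_code.go - captures the status exit code the way helpers_test does.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the post-mortem helpers run, with the paths from this report.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "embed-certs-601000")
	out, err := cmd.Output()
	fmt.Printf("host state: %s\n", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// 7 here is a composite "not running" code, not a crash, which is
		// why the helper logs "exit status 7 (may be ok)".
		fmt.Println("status exit code:", ee.ExitCode())
	}
}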

TestStartStop/group/newest-cni/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-349000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-349000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.899735459s)

-- stdout --
	* [newest-cni-349000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-349000" primary control-plane node in "newest-cni-349000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-349000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:05:49.054240    6514 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:05:49.054352    6514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:49.054355    6514 out.go:304] Setting ErrFile to fd 2...
	I0806 01:05:49.054357    6514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:49.054496    6514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:05:49.055725    6514 out.go:298] Setting JSON to false
	I0806 01:05:49.072135    6514 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3917,"bootTime":1722927632,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:05:49.072210    6514 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:05:49.077283    6514 out.go:177] * [newest-cni-349000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:05:49.084240    6514 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:05:49.084281    6514 notify.go:220] Checking for updates...
	I0806 01:05:49.091270    6514 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:05:49.094246    6514 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:05:49.097249    6514 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:05:49.100293    6514 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:05:49.103234    6514 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:05:49.106518    6514 config.go:182] Loaded profile config "default-k8s-diff-port-689000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:05:49.106574    6514 config.go:182] Loaded profile config "multinode-508000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:05:49.106628    6514 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:05:49.111246    6514 out.go:177] * Using the qemu2 driver based on user configuration
	I0806 01:05:49.118198    6514 start.go:297] selected driver: qemu2
	I0806 01:05:49.118204    6514 start.go:901] validating driver "qemu2" against <nil>
	I0806 01:05:49.118209    6514 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:05:49.120471    6514 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0806 01:05:49.120495    6514 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0806 01:05:49.129234    6514 out.go:177] * Automatically selected the socket_vmnet network
	I0806 01:05:49.132418    6514 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0806 01:05:49.132467    6514 cni.go:84] Creating CNI manager for ""
	I0806 01:05:49.132475    6514 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:05:49.132479    6514 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 01:05:49.132506    6514 start.go:340] cluster config:
	{Name:newest-cni-349000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:05:49.136247    6514 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:05:49.142224    6514 out.go:177] * Starting "newest-cni-349000" primary control-plane node in "newest-cni-349000" cluster
	I0806 01:05:49.146220    6514 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 01:05:49.146236    6514 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0806 01:05:49.146249    6514 cache.go:56] Caching tarball of preloaded images
	I0806 01:05:49.146327    6514 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:05:49.146340    6514 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0806 01:05:49.146411    6514 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/newest-cni-349000/config.json ...
	I0806 01:05:49.146425    6514 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/newest-cni-349000/config.json: {Name:mk909d28397abfc4751907ab4d7baecb9c254296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:05:49.146651    6514 start.go:360] acquireMachinesLock for newest-cni-349000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:49.146686    6514 start.go:364] duration metric: took 29.625µs to acquireMachinesLock for "newest-cni-349000"
	I0806 01:05:49.146697    6514 start.go:93] Provisioning new machine with config: &{Name:newest-cni-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:05:49.146741    6514 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:05:49.155201    6514 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 01:05:49.173455    6514 start.go:159] libmachine.API.Create for "newest-cni-349000" (driver="qemu2")
	I0806 01:05:49.173482    6514 client.go:168] LocalClient.Create starting
	I0806 01:05:49.173537    6514 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:05:49.173569    6514 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:49.173580    6514 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:49.173616    6514 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:05:49.173640    6514 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:49.173647    6514 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:49.174035    6514 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:05:49.330574    6514 main.go:141] libmachine: Creating SSH key...
	I0806 01:05:49.402012    6514 main.go:141] libmachine: Creating Disk image...
	I0806 01:05:49.402017    6514 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:05:49.402217    6514 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2
	I0806 01:05:49.411264    6514 main.go:141] libmachine: STDOUT: 
	I0806 01:05:49.411281    6514 main.go:141] libmachine: STDERR: 
	I0806 01:05:49.411324    6514 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2 +20000M
	I0806 01:05:49.419016    6514 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:05:49.419029    6514 main.go:141] libmachine: STDERR: 
	I0806 01:05:49.419039    6514 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2
	I0806 01:05:49.419043    6514 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:05:49.419058    6514 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:49.419085    6514 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:19:42:5b:f8:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2
	I0806 01:05:49.420604    6514 main.go:141] libmachine: STDOUT: 
	I0806 01:05:49.420620    6514 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:49.420636    6514 client.go:171] duration metric: took 247.151042ms to LocalClient.Create
	I0806 01:05:51.422799    6514 start.go:128] duration metric: took 2.276050667s to createHost
	I0806 01:05:51.422941    6514 start.go:83] releasing machines lock for "newest-cni-349000", held for 2.276189709s
	W0806 01:05:51.422992    6514 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:51.434554    6514 out.go:177] * Deleting "newest-cni-349000" in qemu2 ...
	W0806 01:05:51.462222    6514 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:51.462252    6514 start.go:729] Will try again in 5 seconds ...
	I0806 01:05:56.464457    6514 start.go:360] acquireMachinesLock for newest-cni-349000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:05:56.465006    6514 start.go:364] duration metric: took 450.458µs to acquireMachinesLock for "newest-cni-349000"
	I0806 01:05:56.465166    6514 start.go:93] Provisioning new machine with config: &{Name:newest-cni-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:05:56.465460    6514 start.go:125] createHost starting for "" (driver="qemu2")
	I0806 01:05:56.470079    6514 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 01:05:56.517818    6514 start.go:159] libmachine.API.Create for "newest-cni-349000" (driver="qemu2")
	I0806 01:05:56.517862    6514 client.go:168] LocalClient.Create starting
	I0806 01:05:56.517987    6514 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/ca.pem
	I0806 01:05:56.518055    6514 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:56.518071    6514 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:56.518129    6514 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-965/.minikube/certs/cert.pem
	I0806 01:05:56.518174    6514 main.go:141] libmachine: Decoding PEM data...
	I0806 01:05:56.518191    6514 main.go:141] libmachine: Parsing certificate...
	I0806 01:05:56.518781    6514 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19370-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0806 01:05:56.683683    6514 main.go:141] libmachine: Creating SSH key...
	I0806 01:05:56.861162    6514 main.go:141] libmachine: Creating Disk image...
	I0806 01:05:56.861168    6514 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0806 01:05:56.861378    6514 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2.raw /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2
	I0806 01:05:56.870821    6514 main.go:141] libmachine: STDOUT: 
	I0806 01:05:56.870838    6514 main.go:141] libmachine: STDERR: 
	I0806 01:05:56.870875    6514 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2 +20000M
	I0806 01:05:56.878563    6514 main.go:141] libmachine: STDOUT: Image resized.
	
	I0806 01:05:56.878578    6514 main.go:141] libmachine: STDERR: 
	I0806 01:05:56.878587    6514 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2
	I0806 01:05:56.878592    6514 main.go:141] libmachine: Starting QEMU VM...
	I0806 01:05:56.878601    6514 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:05:56.878644    6514 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:fc:76:da:bf:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2
	I0806 01:05:56.880179    6514 main.go:141] libmachine: STDOUT: 
	I0806 01:05:56.880194    6514 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:05:56.880207    6514 client.go:171] duration metric: took 362.341ms to LocalClient.Create
	I0806 01:05:58.882366    6514 start.go:128] duration metric: took 2.4168585s to createHost
	I0806 01:05:58.882435    6514 start.go:83] releasing machines lock for "newest-cni-349000", held for 2.417418542s
	W0806 01:05:58.882756    6514 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-349000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-349000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:05:58.895440    6514 out.go:177] 
	W0806 01:05:58.898606    6514 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:05:58.898636    6514 out.go:239] * 
	* 
	W0806 01:05:58.901826    6514 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:05:58.916320    6514 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-349000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-349000 -n newest-cni-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-349000 -n newest-cni-349000: exit status 7 (66.115958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.97s)
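
Every qemu2 failure in this run reduces to the same root cause visible in the stderr above: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet, so every connection attempt is refused. A minimal Go probe (stdlib only; the socket path is taken from the log above) reproduces the connection that socket_vmnet_client attempts:

	// probe_socket_vmnet.go - minimal sketch; checks whether the socket_vmnet
	// daemon is accepting connections on the path seen in the failures above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the failure in the log
			fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails the same way, restarting the daemon on the CI host (for a Homebrew install, something like `sudo brew services start socket_vmnet`) is the likely fix; that is an assumption about the host setup, not something this log confirms.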

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-689000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000: exit status 7 (31.509708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-689000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
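
The error here, context "default-k8s-diff-port-689000" does not exist, means the profile's first start failed before minikube ever wrote a context into the kubeconfig, so every subsequent kubectl call against that context fails identically. A short illustrative sketch with k8s.io/client-go (the harness itself shells out to kubectl) that checks for the context before using it:

	// context_check.go - sketch; loads the default kubeconfig and looks for the
	// profile's context, mirroring the error kubectl reports above.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Println("cannot load kubeconfig:", err)
			return
		}
		const name = "default-k8s-diff-port-689000"
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Printf("context %q does not exist\n", name)
		}
	}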

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-689000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-689000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-689000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.865125ms)

** stderr ** 
	error: context "default-k8s-diff-port-689000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-689000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000: exit status 7 (28.317042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-689000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-689000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000: exit status 7 (28.668209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-689000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
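
The "(-want +got)" diff above follows the output convention of github.com/google/go-cmp, with the expected image list passed as want and the parsed output of `minikube image list --format=json` as got; because the host never started, the listing is empty and every expected image carries a leading "-". A hypothetical reconstruction of that comparison (not the harness's exact code):

	// image_diff.go - sketch; reproduces the (-want +got) diff shape above.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/pause:3.9",
		}
		got := []string{} // empty: the profile's VM is Stopped, so nothing is listed
		fmt.Print(cmp.Diff(want, got)) // entries only in want print with "-"
	}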

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-689000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-689000 --alsologtostderr -v=1: exit status 83 (39.705667ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-689000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-689000"

-- /stdout --
** stderr ** 
	I0806 01:05:52.854028    6536 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:05:52.854180    6536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:52.854184    6536 out.go:304] Setting ErrFile to fd 2...
	I0806 01:05:52.854186    6536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:05:52.854300    6536 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:05:52.854522    6536 out.go:298] Setting JSON to false
	I0806 01:05:52.854528    6536 mustload.go:65] Loading cluster: default-k8s-diff-port-689000
	I0806 01:05:52.854732    6536 config.go:182] Loaded profile config "default-k8s-diff-port-689000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:05:52.859271    6536 out.go:177] * The control-plane node default-k8s-diff-port-689000 host is not running: state=Stopped
	I0806 01:05:52.863312    6536 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-689000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-689000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000: exit status 7 (28.6455ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-689000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000: exit status 7 (27.902458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-689000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
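
`minikube pause` exits with status 83 here because the control-plane host is Stopped, exactly what the harness's `--format={{.Host}}` status probe reports. A defensive sketch that inspects the same Host field through `minikube status -o json` before pausing (the JSON field name is inferred from the {{.Host}} template above, so treat it as an assumption):

	// pause_guard.go - sketch; refuses to pause a profile whose host is not running.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// minikube status exits non-zero for a stopped host (status 7 above)
		// but still prints its output, so the exec error is deliberately ignored.
		out, _ := exec.Command("minikube", "status", "-p", "default-k8s-diff-port-689000", "-o", "json").Output()
		var st struct {
			Host string `json:"Host"`
		}
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("could not parse status output:", err)
			return
		}
		if st.Host != "Running" {
			fmt.Println("host state is", st.Host, "- start the cluster before pausing")
			return
		}
		fmt.Println("host is running; safe to pause")
	}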

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-349000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-349000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.184698209s)

-- stdout --
	* [newest-cni-349000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-349000" primary control-plane node in "newest-cni-349000" cluster
	* Restarting existing qemu2 VM for "newest-cni-349000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-349000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0806 01:06:03.180141    6588 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:06:03.180381    6588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:06:03.180385    6588 out.go:304] Setting ErrFile to fd 2...
	I0806 01:06:03.180387    6588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:06:03.180517    6588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:06:03.181700    6588 out.go:298] Setting JSON to false
	I0806 01:06:03.198068    6588 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3931,"bootTime":1722927632,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 01:06:03.198142    6588 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:06:03.203138    6588 out.go:177] * [newest-cni-349000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 01:06:03.210164    6588 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:06:03.210203    6588 notify.go:220] Checking for updates...
	I0806 01:06:03.217152    6588 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 01:06:03.222246    6588 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 01:06:03.225203    6588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:06:03.228127    6588 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 01:06:03.231176    6588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:06:03.234301    6588 config.go:182] Loaded profile config "newest-cni-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0806 01:06:03.234567    6588 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:06:03.239230    6588 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 01:06:03.245056    6588 start.go:297] selected driver: qemu2
	I0806 01:06:03.245065    6588 start.go:901] validating driver "qemu2" against &{Name:newest-cni-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:06:03.245124    6588 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:06:03.247558    6588 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0806 01:06:03.247607    6588 cni.go:84] Creating CNI manager for ""
	I0806 01:06:03.247614    6588 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:06:03.247637    6588 start.go:340] cluster config:
	{Name:newest-cni-349000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-349000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:06:03.251348    6588 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:06:03.259128    6588 out.go:177] * Starting "newest-cni-349000" primary control-plane node in "newest-cni-349000" cluster
	I0806 01:06:03.263128    6588 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 01:06:03.263145    6588 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0806 01:06:03.263159    6588 cache.go:56] Caching tarball of preloaded images
	I0806 01:06:03.263225    6588 preload.go:172] Found /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0806 01:06:03.263232    6588 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0806 01:06:03.263295    6588 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/newest-cni-349000/config.json ...
	I0806 01:06:03.263812    6588 start.go:360] acquireMachinesLock for newest-cni-349000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:06:03.263840    6588 start.go:364] duration metric: took 21.833µs to acquireMachinesLock for "newest-cni-349000"
	I0806 01:06:03.263848    6588 start.go:96] Skipping create...Using existing machine configuration
	I0806 01:06:03.263853    6588 fix.go:54] fixHost starting: 
	I0806 01:06:03.263968    6588 fix.go:112] recreateIfNeeded on newest-cni-349000: state=Stopped err=<nil>
	W0806 01:06:03.263975    6588 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 01:06:03.267253    6588 out.go:177] * Restarting existing qemu2 VM for "newest-cni-349000" ...
	I0806 01:06:03.275200    6588 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:06:03.275239    6588 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:fc:76:da:bf:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2
	I0806 01:06:03.277289    6588 main.go:141] libmachine: STDOUT: 
	I0806 01:06:03.277308    6588 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:06:03.277333    6588 fix.go:56] duration metric: took 13.480834ms for fixHost
	I0806 01:06:03.277337    6588 start.go:83] releasing machines lock for "newest-cni-349000", held for 13.49325ms
	W0806 01:06:03.277346    6588 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:06:03.277388    6588 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:06:03.277393    6588 start.go:729] Will try again in 5 seconds ...
	I0806 01:06:08.279650    6588 start.go:360] acquireMachinesLock for newest-cni-349000: {Name:mk21cb8f09732a4bc9d77eca882c4eaa47f247c5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:06:08.280072    6588 start.go:364] duration metric: took 318.208µs to acquireMachinesLock for "newest-cni-349000"
	I0806 01:06:08.280243    6588 start.go:96] Skipping create...Using existing machine configuration
	I0806 01:06:08.280265    6588 fix.go:54] fixHost starting: 
	I0806 01:06:08.281098    6588 fix.go:112] recreateIfNeeded on newest-cni-349000: state=Stopped err=<nil>
	W0806 01:06:08.281133    6588 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 01:06:08.288505    6588 out.go:177] * Restarting existing qemu2 VM for "newest-cni-349000" ...
	I0806 01:06:08.291707    6588 qemu.go:418] Using hvf for hardware acceleration
	I0806 01:06:08.291975    6588 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:fc:76:da:bf:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19370-965/.minikube/machines/newest-cni-349000/disk.qcow2
	I0806 01:06:08.301545    6588 main.go:141] libmachine: STDOUT: 
	I0806 01:06:08.301613    6588 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0806 01:06:08.301699    6588 fix.go:56] duration metric: took 21.437958ms for fixHost
	I0806 01:06:08.301714    6588 start.go:83] releasing machines lock for "newest-cni-349000", held for 21.615ms
	W0806 01:06:08.301878    6588 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-349000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-349000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0806 01:06:08.310665    6588 out.go:177] 
	W0806 01:06:08.314711    6588 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0806 01:06:08.314760    6588 out.go:239] * 
	* 
	W0806 01:06:08.317098    6588 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:06:08.324696    6588 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-349000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-349000 -n newest-cni-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-349000 -n newest-cni-349000: exit status 7 (68.322041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
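
The second-start log makes minikube's recovery behavior visible: fixHost fails, StartHost is retried once after a fixed five-second delay ("Will try again in 5 seconds"), and only then does the run exit with GUEST_PROVISION (status 80). A compressed sketch of that observable control flow, not minikube's actual implementation:

	// retry_sketch.go - models the single retry with fixed delay seen above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start that fails in the log above.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds"
			if err = startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err) // exit status 80
			}
		}
	}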

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-349000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-349000 -n newest-cni-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-349000 -n newest-cni-349000: exit status 7 (29.287833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-349000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-349000 --alsologtostderr -v=1: exit status 83 (41.151125ms)

-- stdout --
	* The control-plane node newest-cni-349000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-349000"

-- /stdout --
** stderr ** 
	I0806 01:06:08.506493    6604 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:06:08.506653    6604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:06:08.506656    6604 out.go:304] Setting ErrFile to fd 2...
	I0806 01:06:08.506659    6604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:06:08.506779    6604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 01:06:08.506996    6604 out.go:298] Setting JSON to false
	I0806 01:06:08.507003    6604 mustload.go:65] Loading cluster: newest-cni-349000
	I0806 01:06:08.507210    6604 config.go:182] Loaded profile config "newest-cni-349000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0806 01:06:08.511645    6604 out.go:177] * The control-plane node newest-cni-349000 host is not running: state=Stopped
	I0806 01:06:08.515600    6604 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-349000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-349000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-349000 -n newest-cni-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-349000 -n newest-cni-349000: exit status 7 (29.090083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-349000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-349000 -n newest-cni-349000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-349000 -n newest-cni-349000: exit status 7 (28.382208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-349000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (161/278)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 19.92
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-rc.0/json-events 17.88
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 205.83
38 TestAddons/serial/Volcano 37.93
40 TestAddons/serial/GCPAuth/Namespaces 0.07
42 TestAddons/parallel/Registry 13.29
43 TestAddons/parallel/Ingress 18.71
44 TestAddons/parallel/InspektorGadget 10.22
45 TestAddons/parallel/MetricsServer 5.28
48 TestAddons/parallel/CSI 39.8
49 TestAddons/parallel/Headlamp 17.53
50 TestAddons/parallel/CloudSpanner 5.15
51 TestAddons/parallel/LocalPath 41.93
52 TestAddons/parallel/NvidiaDevicePlugin 5.14
53 TestAddons/parallel/Yakd 10.2
54 TestAddons/StoppedEnableDisable 12.39
62 TestHyperKitDriverInstallOrUpdate 10.59
65 TestErrorSpam/setup 34.46
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.24
68 TestErrorSpam/pause 0.64
69 TestErrorSpam/unpause 0.58
70 TestErrorSpam/stop 64.25
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 88.18
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 38.68
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.04
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.56
82 TestFunctional/serial/CacheCmd/cache/add_local 1.09
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.03
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.61
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.92
90 TestFunctional/serial/ExtraConfig 39.74
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.65
93 TestFunctional/serial/LogsFileCmd 0.66
94 TestFunctional/serial/InvalidService 4.49
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 9.44
98 TestFunctional/parallel/DryRun 0.22
99 TestFunctional/parallel/InternationalLanguage 0.13
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 25.49
108 TestFunctional/parallel/SSHCmd 0.12
109 TestFunctional/parallel/CpCmd 0.4
111 TestFunctional/parallel/FileSync 0.06
112 TestFunctional/parallel/CertSync 0.38
116 TestFunctional/parallel/NodeLabels 0.07
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.13
120 TestFunctional/parallel/License 0.31
121 TestFunctional/parallel/Version/short 0.03
122 TestFunctional/parallel/Version/components 0.18
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
127 TestFunctional/parallel/ImageCommands/ImageBuild 1.67
128 TestFunctional/parallel/ImageCommands/Setup 1.76
129 TestFunctional/parallel/DockerEnv/bash 0.27
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
133 TestFunctional/parallel/ServiceCmd/DeployApp 12.08
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.26
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.19
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.02
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.11
146 TestFunctional/parallel/ServiceCmd/List 0.08
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.09
149 TestFunctional/parallel/ServiceCmd/Format 0.09
150 TestFunctional/parallel/ServiceCmd/URL 0.09
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
158 TestFunctional/parallel/ProfileCmd/profile_list 0.13
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
160 TestFunctional/parallel/MountCmd/any-port 6.23
161 TestFunctional/parallel/MountCmd/specific-port 0.91
162 TestFunctional/parallel/MountCmd/VerifyCleanup 0.95
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 202.14
170 TestMultiControlPlane/serial/DeployApp 5.06
171 TestMultiControlPlane/serial/PingHostFromPods 0.75
172 TestMultiControlPlane/serial/AddWorkerNode 53.2
173 TestMultiControlPlane/serial/NodeLabels 0.18
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.26
175 TestMultiControlPlane/serial/CopyFile 4.35
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.1
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 1.79
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.2
217 TestMainNoArgs 0.03
264 TestStoppedBinaryUpgrade/Setup 0.98
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
281 TestNoKubernetes/serial/ProfileList 31.48
282 TestNoKubernetes/serial/Stop 2.05
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
296 TestStoppedBinaryUpgrade/MinikubeLogs 0.69
299 TestStartStop/group/old-k8s-version/serial/Stop 3.26
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
310 TestStartStop/group/no-preload/serial/Stop 3
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
323 TestStartStop/group/embed-certs/serial/Stop 1.82
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.61
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
343 TestStartStop/group/newest-cni/serial/Stop 3.97
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-830000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-830000: exit status 85 (94.434709ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-830000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT |          |
	|         | -p download-only-830000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:04:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:04:12.808630    1457 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:04:12.808760    1457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:04:12.808763    1457 out.go:304] Setting ErrFile to fd 2...
	I0806 00:04:12.808770    1457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:04:12.808891    1457 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	W0806 00:04:12.808989    1457 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19370-965/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19370-965/.minikube/config/config.json: no such file or directory
	I0806 00:04:12.810241    1457 out.go:298] Setting JSON to true
	I0806 00:04:12.827335    1457 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":220,"bootTime":1722927632,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:04:12.827403    1457 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:04:12.833139    1457 out.go:97] [download-only-830000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:04:12.833286    1457 notify.go:220] Checking for updates...
	W0806 00:04:12.833313    1457 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball: no such file or directory
	I0806 00:04:12.836157    1457 out.go:169] MINIKUBE_LOCATION=19370
	I0806 00:04:12.842097    1457 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:04:12.847172    1457 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:04:12.848548    1457 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:04:12.851123    1457 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	W0806 00:04:12.857153    1457 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0806 00:04:12.857399    1457 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:04:12.861119    1457 out.go:97] Using the qemu2 driver based on user configuration
	I0806 00:04:12.861138    1457 start.go:297] selected driver: qemu2
	I0806 00:04:12.861152    1457 start.go:901] validating driver "qemu2" against <nil>
	I0806 00:04:12.861209    1457 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:04:12.865146    1457 out.go:169] Automatically selected the socket_vmnet network
	I0806 00:04:12.871967    1457 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0806 00:04:12.872052    1457 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 00:04:12.872077    1457 cni.go:84] Creating CNI manager for ""
	I0806 00:04:12.872092    1457 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0806 00:04:12.872167    1457 start.go:340] cluster config:
	{Name:download-only-830000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-830000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:04:12.877540    1457 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:04:12.882136    1457 out.go:97] Downloading VM boot image ...
	I0806 00:04:12.882156    1457 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0806 00:04:20.151361    1457 out.go:97] Starting "download-only-830000" primary control-plane node in "download-only-830000" cluster
	I0806 00:04:20.151399    1457 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0806 00:04:20.206348    1457 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0806 00:04:20.206370    1457 cache.go:56] Caching tarball of preloaded images
	I0806 00:04:20.206526    1457 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0806 00:04:20.210746    1457 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0806 00:04:20.210768    1457 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 00:04:20.287368    1457 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0806 00:04:29.138093    1457 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 00:04:29.138280    1457 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 00:04:29.833264    1457 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0806 00:04:29.833465    1457 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/download-only-830000/config.json ...
	I0806 00:04:29.833486    1457 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/download-only-830000/config.json: {Name:mk241b18476bf4c8f435537a1572cd00aba13ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:04:29.833732    1457 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0806 00:04:29.833933    1457 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0806 00:04:30.191760    1457 out.go:169] 
	W0806 00:04:30.196814    1457 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19370-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108b49d20 0x108b49d20 0x108b49d20 0x108b49d20 0x108b49d20 0x108b49d20 0x108b49d20] Decompressors:map[bz2:0x1400012c578 gz:0x1400012c600 tar:0x1400012c5b0 tar.bz2:0x1400012c5c0 tar.gz:0x1400012c5d0 tar.xz:0x1400012c5e0 tar.zst:0x1400012c5f0 tbz2:0x1400012c5c0 tgz:0x1400012c5d0 txz:0x1400012c5e0 tzst:0x1400012c5f0 xz:0x1400012c608 zip:0x1400012c610 zst:0x1400012c620] Getters:map[file:0x1400054cda0 http:0x14000814460 https:0x140008144b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0806 00:04:30.196839    1457 out_reason.go:110] 
	W0806 00:04:30.204766    1457 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:04:30.207531    1457 out.go:169] 
	
	
	* The control-plane node download-only-830000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-830000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-830000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (19.92s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-868000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-868000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (19.919555042s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (19.92s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-868000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-868000: exit status 85 (75.365416ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-830000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT |                     |
	|         | -p download-only-830000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT | 06 Aug 24 00:04 PDT |
	| delete  | -p download-only-830000        | download-only-830000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT | 06 Aug 24 00:04 PDT |
	| start   | -o=json --download-only        | download-only-868000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT |                     |
	|         | -p download-only-868000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:04:30
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:04:30.617109    1485 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:04:30.617235    1485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:04:30.617238    1485 out.go:304] Setting ErrFile to fd 2...
	I0806 00:04:30.617240    1485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:04:30.617348    1485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:04:30.618362    1485 out.go:298] Setting JSON to true
	I0806 00:04:30.634277    1485 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":238,"bootTime":1722927632,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:04:30.634338    1485 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:04:30.638161    1485 out.go:97] [download-only-868000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:04:30.638241    1485 notify.go:220] Checking for updates...
	I0806 00:04:30.642166    1485 out.go:169] MINIKUBE_LOCATION=19370
	I0806 00:04:30.645222    1485 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:04:30.649180    1485 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:04:30.652251    1485 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:04:30.655264    1485 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	W0806 00:04:30.661136    1485 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0806 00:04:30.661307    1485 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:04:30.664167    1485 out.go:97] Using the qemu2 driver based on user configuration
	I0806 00:04:30.664176    1485 start.go:297] selected driver: qemu2
	I0806 00:04:30.664180    1485 start.go:901] validating driver "qemu2" against <nil>
	I0806 00:04:30.664226    1485 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:04:30.667127    1485 out.go:169] Automatically selected the socket_vmnet network
	I0806 00:04:30.672195    1485 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0806 00:04:30.672286    1485 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 00:04:30.672304    1485 cni.go:84] Creating CNI manager for ""
	I0806 00:04:30.672312    1485 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:04:30.672317    1485 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 00:04:30.672356    1485 start.go:340] cluster config:
	{Name:download-only-868000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-868000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:04:30.675695    1485 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:04:30.679192    1485 out.go:97] Starting "download-only-868000" primary control-plane node in "download-only-868000" cluster
	I0806 00:04:30.679199    1485 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:04:30.732890    1485 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 00:04:30.732902    1485 cache.go:56] Caching tarball of preloaded images
	I0806 00:04:30.733072    1485 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:04:30.738243    1485 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0806 00:04:30.738250    1485 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0806 00:04:30.810301    1485 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0806 00:04:41.964587    1485 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0806 00:04:41.964766    1485 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0806 00:04:42.507669    1485 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:04:42.507847    1485 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/download-only-868000/config.json ...
	I0806 00:04:42.507863    1485 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/download-only-868000/config.json: {Name:mkd704fd0a419846bd3c1bc9ea067ffee0c95ecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:04:42.508106    1485 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:04:42.508233    1485 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-868000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-868000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-868000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-rc.0/json-events (17.88s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-184000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-184000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 : (17.877644666s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (17.88s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-184000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-184000: exit status 85 (74.890833ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-830000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT |                     |
	|         | -p download-only-830000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT | 06 Aug 24 00:04 PDT |
	| delete  | -p download-only-830000           | download-only-830000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT | 06 Aug 24 00:04 PDT |
	| start   | -o=json --download-only           | download-only-868000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT |                     |
	|         | -p download-only-868000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT | 06 Aug 24 00:04 PDT |
	| delete  | -p download-only-868000           | download-only-868000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT | 06 Aug 24 00:04 PDT |
	| start   | -o=json --download-only           | download-only-184000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT |                     |
	|         | -p download-only-184000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:04:50
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:04:50.821040    1513 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:04:50.821174    1513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:04:50.821177    1513 out.go:304] Setting ErrFile to fd 2...
	I0806 00:04:50.821180    1513 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:04:50.821299    1513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:04:50.822375    1513 out.go:298] Setting JSON to true
	I0806 00:04:50.838452    1513 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":258,"bootTime":1722927632,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:04:50.838509    1513 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:04:50.842585    1513 out.go:97] [download-only-184000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:04:50.842682    1513 notify.go:220] Checking for updates...
	I0806 00:04:50.846604    1513 out.go:169] MINIKUBE_LOCATION=19370
	I0806 00:04:50.851486    1513 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:04:50.854468    1513 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:04:50.857527    1513 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:04:50.860543    1513 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	W0806 00:04:50.865449    1513 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0806 00:04:50.865588    1513 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:04:50.868482    1513 out.go:97] Using the qemu2 driver based on user configuration
	I0806 00:04:50.868489    1513 start.go:297] selected driver: qemu2
	I0806 00:04:50.868492    1513 start.go:901] validating driver "qemu2" against <nil>
	I0806 00:04:50.868536    1513 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:04:50.871526    1513 out.go:169] Automatically selected the socket_vmnet network
	I0806 00:04:50.876581    1513 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0806 00:04:50.876674    1513 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 00:04:50.876690    1513 cni.go:84] Creating CNI manager for ""
	I0806 00:04:50.876698    1513 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:04:50.876706    1513 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 00:04:50.876746    1513 start.go:340] cluster config:
	{Name:download-only-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:04:50.880002    1513 iso.go:125] acquiring lock: {Name:mk076faf878d5418246851f5d7220c29df4bb994 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:04:50.882432    1513 out.go:97] Starting "download-only-184000" primary control-plane node in "download-only-184000" cluster
	I0806 00:04:50.882439    1513 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 00:04:50.942502    1513 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0806 00:04:50.942518    1513 cache.go:56] Caching tarball of preloaded images
	I0806 00:04:50.942735    1513 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 00:04:50.946923    1513 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0806 00:04:50.946930    1513 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 00:04:51.024712    1513 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:c1f196b49f29ebea060b9249b6cb8e03 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0806 00:04:58.079717    1513 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 00:04:58.079892    1513 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19370-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0806 00:04:58.602704    1513 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0806 00:04:58.602920    1513 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/download-only-184000/config.json ...
	I0806 00:04:58.602935    1513 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/download-only-184000/config.json: {Name:mk9d20a69d5c6135107b318cf3f34e79b3640bd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:04:58.603181    1513 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 00:04:58.603298    1513 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19370-965/.minikube/cache/darwin/arm64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-184000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-184000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-184000
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-435000 --alsologtostderr --binary-mirror http://127.0.0.1:49324 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-435000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-435000
--- PASS: TestBinaryMirror (0.34s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-585000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-585000: exit status 85 (55.32475ms)

-- stdout --
	* Profile "addons-585000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-585000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-585000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-585000: exit status 85 (51.54375ms)

-- stdout --
	* Profile "addons-585000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-585000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (205.83s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-585000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-585000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m25.829773584s)
--- PASS: TestAddons/Setup (205.83s)

TestAddons/serial/Volcano (37.93s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 7.797459ms
addons_test.go:897: volcano-scheduler stabilized in 7.828584ms
addons_test.go:913: volcano-controller stabilized in 7.865584ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-87slb" [349af79f-c1fe-469f-bc64-4e62f4da9d4a] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004177667s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-z4t76" [cf79d6f8-43bd-4e67-afd7-22bf50e8268e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002023125s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-xknfm" [2fbd2d57-2c4b-4022-8733-c9664e97d01e] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.009490584s
addons_test.go:932: (dbg) Run:  kubectl --context addons-585000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-585000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-585000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [3f8b2047-40bd-4c00-aecb-45c3240249f6] Pending
helpers_test.go:344: "test-job-nginx-0" [3f8b2047-40bd-4c00-aecb-45c3240249f6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [3f8b2047-40bd-4c00-aecb-45c3240249f6] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004293042s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-585000 addons disable volcano --alsologtostderr -v=1: (9.713353916s)
--- PASS: TestAddons/serial/Volcano (37.93s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-585000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-585000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/parallel/Registry (13.29s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.246667ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-kt8rc" [8b170ca7-d76f-4567-9f8e-24d12bcf12aa] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004320542s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-g997r" [832630b4-8684-440a-9d77-a3aa041bbec9] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00440875s
addons_test.go:342: (dbg) Run:  kubectl --context addons-585000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-585000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-585000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.999740916s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 ip
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.29s)

TestAddons/parallel/Ingress (18.71s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-585000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-585000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-585000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [89c2d0b5-1978-40fa-bb0b-fddc35c5e260] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [89c2d0b5-1978-40fa-bb0b-fddc35c5e260] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003813s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-585000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-585000 addons disable ingress --alsologtostderr -v=1: (7.232138417s)
--- PASS: TestAddons/parallel/Ingress (18.71s)

TestAddons/parallel/InspektorGadget (10.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-r66hn" [39a2f21e-b756-4464-a194-fe1c43595944] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.002327125s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-585000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-585000: (5.212330041s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.28s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.414792ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-rvkkj" [f1a8162d-0e5a-4fc7-8d32-773be124e99a] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003903791s
addons_test.go:417: (dbg) Run:  kubectl --context addons-585000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.28s)

TestAddons/parallel/CSI (39.8s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.947042ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/08/06 00:09:42 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:580: (dbg) Run:  kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a688768a-f67e-4ca6-b2ba-c3917f07aed7] Pending
helpers_test.go:344: "task-pv-pod" [a688768a-f67e-4ca6-b2ba-c3917f07aed7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a688768a-f67e-4ca6-b2ba-c3917f07aed7] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004182125s
addons_test.go:590: (dbg) Run:  kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-585000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-585000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-585000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-585000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-585000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9bc23c87-f58e-4862-a279-920ab5a4f30b] Pending
helpers_test.go:344: "task-pv-pod-restore" [9bc23c87-f58e-4862-a279-920ab5a4f30b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9bc23c87-f58e-4862-a279-920ab5a4f30b] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004792166s
addons_test.go:632: (dbg) Run:  kubectl --context addons-585000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-585000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-585000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-585000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.098131541s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.80s)
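For reference, the snapshot/restore cycle this test walks through can be replayed by hand on any cluster with the csi-hostpath-driver and volumesnapshots addons enabled. A minimal sketch reusing the manifests and object names from the log above (kubectl context/namespace flags omitted for brevity):

  kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml          # pod mounting the hpvc claim
  kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml        # snapshot the live volume
  kubectl get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
  kubectl delete pod task-pv-pod && kubectl delete pvc hpvc           # drop the original pod and claim
  kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # new claim sourced from the snapshot
  kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod mounting the restored claim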

TestAddons/parallel/Headlamp (17.53s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-585000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-vlkwm" [4f721eac-099a-42a1-8f0d-40d0e7493012] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-vlkwm" [4f721eac-099a-42a1-8f0d-40d0e7493012] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004392833s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-585000 addons disable headlamp --alsologtostderr -v=1: (5.193677209s)
--- PASS: TestAddons/parallel/Headlamp (17.53s)

TestAddons/parallel/CloudSpanner (5.15s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-r5k7s" [7b9ac2d1-8454-4e70-b416-c71df3e0421a] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003832834s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-585000
--- PASS: TestAddons/parallel/CloudSpanner (5.15s)

TestAddons/parallel/LocalPath (41.93s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-585000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-585000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [75a57c38-0965-43ad-93c5-74eed303858a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [75a57c38-0965-43ad-93c5-74eed303858a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [75a57c38-0965-43ad-93c5-74eed303858a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004257625s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-585000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 ssh "cat /opt/local-path-provisioner/pvc-b220d404-4032-42d1-99ee-b95ea60a5750_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-585000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-585000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-585000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.469900875s)
--- PASS: TestAddons/parallel/LocalPath (41.93s)
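The LocalPath assertion above boils down to: provision a PVC through the local-path provisioner, let a pod write to it, then read the file straight off the node. A rough sketch of the same steps; <pvc-uid> stands in for the generated claim UID, which differs per run (here it was pvc-b220d404-4032-42d1-99ee-b95ea60a5750):

  kubectl apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl apply -f testdata/storage-provisioner-rancher/pod.yaml
  # once test-local-path reports Succeeded, verify the data on the node:
  minikube -p addons-585000 ssh "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"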

TestAddons/parallel/NvidiaDevicePlugin (5.14s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nk7s7" [d8f2fd2a-7aa2-455a-94fc-7464ca70e842] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004093959s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-585000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.14s)

TestAddons/parallel/Yakd (10.2s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-j6w48" [9810e42f-834d-499a-9437-d81c2c552854] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003725208s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-585000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-585000 addons disable yakd --alsologtostderr -v=1: (5.191586333s)
--- PASS: TestAddons/parallel/Yakd (10.20s)

TestAddons/StoppedEnableDisable (12.39s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-585000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-585000: (12.200434083s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-585000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-585000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-585000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (10.59s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.59s)

TestErrorSpam/setup (34.46s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-327000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-327000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 --driver=qemu2 : (34.457873s)
--- PASS: TestErrorSpam/setup (34.46s)

TestErrorSpam/start (0.34s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.64s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 pause
--- PASS: TestErrorSpam/pause (0.64s)

TestErrorSpam/unpause (0.58s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 unpause
--- PASS: TestErrorSpam/unpause (0.58s)

TestErrorSpam/stop (64.25s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 stop: (12.192163417s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 stop: (26.028581s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-327000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-327000 stop: (26.029722708s)
--- PASS: TestErrorSpam/stop (64.25s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19370-965/.minikube/files/etc/test/nested/copy/1455/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (88.18s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-804000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0806 00:13:35.486343    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:13:35.492895    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:13:35.504164    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:13:35.526274    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:13:35.568385    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:13:35.650497    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:13:35.812578    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:13:36.134695    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:13:36.776851    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:13:38.058999    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:13:40.620902    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:13:45.741523    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:13:55.983593    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:14:16.465634    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-804000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m28.179509709s)
--- PASS: TestFunctional/serial/StartWithProxy (88.18s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.68s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-804000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-804000 --alsologtostderr -v=8: (38.683676625s)
functional_test.go:659: soft start took 38.684064292s for "functional-804000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.68s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-804000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 cache add registry.k8s.io/pause:latest
E0806 00:14:57.427618    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.56s)

TestFunctional/serial/CacheCmd/cache/add_local (1.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-804000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1350982402/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 cache add minikube-local-cache-test:functional-804000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 cache delete minikube-local-cache-test:functional-804000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-804000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.61s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-804000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (65.94675ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.61s)
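cache_reload demonstrates the recovery path when a cached image has been removed inside the node: the crictl lookup fails with exit status 1 until `cache reload` pushes the host-side cached copy back in. Condensed from the log above:

  minikube -p functional-804000 ssh sudo docker rmi registry.k8s.io/pause:latest
  minikube -p functional-804000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
  minikube -p functional-804000 cache reload
  minikube -p functional-804000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again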

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.66s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 kubectl -- --context functional-804000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-804000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

TestFunctional/serial/ExtraConfig (39.74s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-804000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-804000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.742370625s)
functional_test.go:757: restart took 39.74249075s for "functional-804000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.74s)

TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-804000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.66s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3944618806/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.66s)

TestFunctional/serial/InvalidService (4.49s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-804000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-804000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-804000: exit status 115 (97.291875ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30862 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-804000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-804000 delete -f testdata/invalidsvc.yaml: (1.304066042s)
--- PASS: TestFunctional/serial/InvalidService (4.49s)

TestFunctional/parallel/ConfigCmd (0.22s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-804000 config get cpus: exit status 14 (32.717542ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-804000 config get cpus: exit status 14 (29.162708ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
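Both non-zero exits above are expected: `config get` returns exit status 14 when the key has no value set. The full round trip the test performs, in shell form:

  minikube -p functional-804000 config unset cpus
  minikube -p functional-804000 config get cpus     # exit status 14: key not set
  minikube -p functional-804000 config set cpus 2
  minikube -p functional-804000 config get cpus     # prints 2, exit status 0
  minikube -p functional-804000 config unset cpus   # back to unset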

TestFunctional/parallel/DashboardCmd (9.44s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-804000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-804000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2418: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.44s)

TestFunctional/parallel/DryRun (0.22s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-804000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-804000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.748ms)

-- stdout --
	* [functional-804000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0806 00:16:32.146244    2394 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:16:32.146374    2394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:16:32.146377    2394 out.go:304] Setting ErrFile to fd 2...
	I0806 00:16:32.146380    2394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:16:32.146515    2394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:16:32.147568    2394 out.go:298] Setting JSON to false
	I0806 00:16:32.165925    2394 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":960,"bootTime":1722927632,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:16:32.165996    2394 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:16:32.170527    2394 out.go:177] * [functional-804000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0806 00:16:32.177536    2394 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:16:32.177625    2394 notify.go:220] Checking for updates...
	I0806 00:16:32.184506    2394 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:16:32.187508    2394 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:16:32.190496    2394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:16:32.193509    2394 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:16:32.196448    2394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:16:32.199818    2394 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:16:32.200075    2394 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:16:32.204507    2394 out.go:177] * Using the qemu2 driver based on existing profile
	I0806 00:16:32.211467    2394 start.go:297] selected driver: qemu2
	I0806 00:16:32.211472    2394 start.go:901] validating driver "qemu2" against &{Name:functional-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:16:32.211523    2394 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:16:32.217265    2394 out.go:177] 
	W0806 00:16:32.221480    2394 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0806 00:16:32.225456    2394 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-804000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
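The first invocation deliberately under-requests memory to confirm the RSRC_INSUFFICIENT_REQ_MEMORY guard fires during a dry run (exit status 23, no VM touched); the second dry run with the profile's existing settings passes. In shell form:

  minikube start -p functional-804000 --dry-run --memory 250MB --driver=qemu2    # exit 23: below the 1800MB floor
  minikube start -p functional-804000 --dry-run --alsologtostderr -v=1 --driver=qemu2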

TestFunctional/parallel/InternationalLanguage (0.13s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-804000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-804000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (127.878ms)

-- stdout --
	* [functional-804000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0806 00:16:32.362590    2405 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:16:32.362700    2405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:16:32.362704    2405 out.go:304] Setting ErrFile to fd 2...
	I0806 00:16:32.362707    2405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:16:32.362847    2405 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
	I0806 00:16:32.364286    2405 out.go:298] Setting JSON to false
	I0806 00:16:32.383011    2405 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":960,"bootTime":1722927632,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0806 00:16:32.383112    2405 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:16:32.387479    2405 out.go:177] * [functional-804000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0806 00:16:32.393507    2405 notify.go:220] Checking for updates...
	I0806 00:16:32.397450    2405 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:16:32.406458    2405 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	I0806 00:16:32.414378    2405 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0806 00:16:32.423448    2405 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:16:32.427494    2405 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	I0806 00:16:32.430506    2405 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:16:32.433666    2405 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:16:32.433909    2405 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:16:32.438460    2405 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0806 00:16:32.445445    2405 start.go:297] selected driver: qemu2
	I0806 00:16:32.445452    2405 start.go:901] validating driver "qemu2" against &{Name:functional-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:16:32.445500    2405 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:16:32.451441    2405 out.go:177] 
	W0806 00:16:32.454456    2405 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0806 00:16:32.458450    2405 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.24s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
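The three invocations cover the human-readable, Go-template, and JSON status forms; the -f argument is an arbitrary Go template evaluated over minikube's status struct. Illustrative variants using only the fields exercised above:

  minikube -p functional-804000 status
  minikube -p functional-804000 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
  minikube -p functional-804000 status -o json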

TestFunctional/parallel/AddonsCmd (0.09s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (25.49s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e4cc646b-5d5c-47c7-828b-ec853a13964b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003708208s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-804000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-804000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-804000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-804000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a361b88a-9b92-49d9-8b12-68450e7788a9] Pending
helpers_test.go:344: "sp-pod" [a361b88a-9b92-49d9-8b12-68450e7788a9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a361b88a-9b92-49d9-8b12-68450e7788a9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004125583s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-804000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-804000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-804000 delete -f testdata/storage-provisioner/pod.yaml: (1.084046959s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-804000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [de769b8e-374b-43c6-b6d8-b64ab9ff1848] Pending
E0806 00:16:19.331170    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [de769b8e-374b-43c6-b6d8-b64ab9ff1848] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [de769b8e-374b-43c6-b6d8-b64ab9ff1848] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003754541s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-804000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.49s)
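The property checked here is durability across pod restarts: data written through the first sp-pod must still be visible after that pod is deleted and a new one mounts the same claim. In outline, using the same manifests as the log:

  kubectl apply -f testdata/storage-provisioner/pvc.yaml
  kubectl apply -f testdata/storage-provisioner/pod.yaml
  kubectl exec sp-pod -- touch /tmp/mount/foo
  kubectl delete -f testdata/storage-provisioner/pod.yaml
  kubectl apply -f testdata/storage-provisioner/pod.yaml     # fresh pod, same claim
  kubectl exec sp-pod -- ls /tmp/mount                       # foo survives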

TestFunctional/parallel/SSHCmd (0.12s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.4s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh -n functional-804000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 cp functional-804000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd138501457/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh -n functional-804000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh -n functional-804000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.40s)
TestFunctional/parallel/FileSync (0.06s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1455/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "sudo cat /etc/test/nested/copy/1455/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)
TestFunctional/parallel/CertSync (0.38s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1455.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "sudo cat /etc/ssl/certs/1455.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1455.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "sudo cat /usr/share/ca-certificates/1455.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14552.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "sudo cat /etc/ssl/certs/14552.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14552.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "sudo cat /usr/share/ca-certificates/14552.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.38s)
TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-804000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.13s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-804000 ssh "sudo systemctl is-active crio": exit status 1 (128.81175ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.13s)
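
The non-zero exit above is the expected outcome: systemctl is-active prints the unit state and exits with status 3 when the unit is inactive, so the test passes when the non-active runtime (crio, on this docker-runtime profile) reports "inactive". A minimal sketch of the same check in Go (assumption: run where systemctl is on PATH; the test runs it inside the VM over minikube ssh):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Output() still returns captured stdout when the command exits non-zero,
	// so the printed state can be inspected alongside the exit error.
	out, err := exec.Command("systemctl", "is-active", "crio").Output()
	state := strings.TrimSpace(string(out))
	switch {
	case err == nil:
		fmt.Println("crio is active (unexpected on a docker-runtime node)")
	case state == "inactive":
		fmt.Println("crio is present but disabled, as the test expects")
	default:
		fmt.Println("crio state:", state, "err:", err)
	}
}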
TestFunctional/parallel/License (0.31s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)
TestFunctional/parallel/Version/short (0.03s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.03s)
TestFunctional/parallel/Version/components (0.18s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.18s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-804000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-804000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-804000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-804000 image ls --format short --alsologtostderr:
I0806 00:16:35.454675    2458 out.go:291] Setting OutFile to fd 1 ...
I0806 00:16:35.454829    2458 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:16:35.454836    2458 out.go:304] Setting ErrFile to fd 2...
I0806 00:16:35.454839    2458 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:16:35.454980    2458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
I0806 00:16:35.455379    2458 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:16:35.455436    2458 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:16:35.456252    2458 ssh_runner.go:195] Run: systemctl --version
I0806 00:16:35.456261    2458 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/functional-804000/id_rsa Username:docker}
I0806 00:16:35.480381    2458 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-804000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/minikube-local-cache-test | functional-804000 | 1b32db1098c60 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| docker.io/kicbase/echo-server               | functional-804000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-804000 image ls --format table --alsologtostderr:
I0806 00:16:35.670784    2464 out.go:291] Setting OutFile to fd 1 ...
I0806 00:16:35.670939    2464 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:16:35.670942    2464 out.go:304] Setting ErrFile to fd 2...
I0806 00:16:35.670945    2464 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:16:35.671083    2464 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
I0806 00:16:35.671542    2464 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:16:35.671605    2464 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:16:35.672379    2464 ssh_runner.go:195] Run: systemctl --version
I0806 00:16:35.672386    2464 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/functional-804000/id_rsa Username:docker}
I0806 00:16:35.696833    2464 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-804000 image ls --format json --alsologtostderr:
[{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1b32db1098c607108c474081092d371e322ea70d808793ceef293007d588bd49","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-804000"],"size":"
30"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-804000"],"size":"4780000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c7
48419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-804000 image ls --format json --alsologtostderr:
I0806 00:16:35.602496    2462 out.go:291] Setting OutFile to fd 1 ...
I0806 00:16:35.602631    2462 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:16:35.602635    2462 out.go:304] Setting ErrFile to fd 2...
I0806 00:16:35.602638    2462 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:16:35.602813    2462 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
I0806 00:16:35.603231    2462 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:16:35.603291    2462 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:16:35.604121    2462 ssh_runner.go:195] Run: systemctl --version
I0806 00:16:35.604130    2462 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/functional-804000/id_rsa Username:docker}
I0806 00:16:35.627231    2462 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
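
The stdout above is one JSON array of objects with id, repoDigests, repoTags, and size fields (sizes are byte counts encoded as strings). A minimal Go sketch for consuming that output, e.g. via `minikube image ls --format json | <program>` (the struct below is inferred from the output shown, not taken from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the objects printed by `image ls --format json` above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"` // byte count, encoded as a string
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%-60s %s bytes\n", tag, img.Size)
		}
	}
}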
TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-804000 image ls --format yaml --alsologtostderr:
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 1b32db1098c607108c474081092d371e322ea70d808793ceef293007d588bd49
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-804000
size: "30"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-804000
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-804000 image ls --format yaml --alsologtostderr:
I0806 00:16:35.534965    2460 out.go:291] Setting OutFile to fd 1 ...
I0806 00:16:35.535114    2460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:16:35.535117    2460 out.go:304] Setting ErrFile to fd 2...
I0806 00:16:35.535120    2460 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:16:35.535250    2460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
I0806 00:16:35.535651    2460 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:16:35.535713    2460 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:16:35.536513    2460 ssh_runner.go:195] Run: systemctl --version
I0806 00:16:35.536522    2460 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/functional-804000/id_rsa Username:docker}
I0806 00:16:35.558761    2460 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)
TestFunctional/parallel/ImageCommands/ImageBuild (1.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-804000 ssh pgrep buildkitd: exit status 1 (55.421125ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image build -t localhost/my-image:functional-804000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-804000 image build -t localhost/my-image:functional-804000 testdata/build --alsologtostderr: (1.530982334s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-804000 image build -t localhost/my-image:functional-804000 testdata/build --alsologtostderr:
I0806 00:16:35.792250    2468 out.go:291] Setting OutFile to fd 1 ...
I0806 00:16:35.792502    2468 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:16:35.792505    2468 out.go:304] Setting ErrFile to fd 2...
I0806 00:16:35.792510    2468 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:16:35.792629    2468 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-965/.minikube/bin
I0806 00:16:35.793063    2468 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:16:35.793871    2468 config.go:182] Loaded profile config "functional-804000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:16:35.794717    2468 ssh_runner.go:195] Run: systemctl --version
I0806 00:16:35.794725    2468 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-965/.minikube/machines/functional-804000/id_rsa Username:docker}
I0806 00:16:35.816928    2468 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2130914667.tar
I0806 00:16:35.816988    2468 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0806 00:16:35.820869    2468 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2130914667.tar
I0806 00:16:35.822405    2468 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2130914667.tar: stat -c "%s %y" /var/lib/minikube/build/build.2130914667.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2130914667.tar': No such file or directory
I0806 00:16:35.822419    2468 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2130914667.tar --> /var/lib/minikube/build/build.2130914667.tar (3072 bytes)
I0806 00:16:35.831222    2468 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2130914667
I0806 00:16:35.834624    2468 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2130914667 -xf /var/lib/minikube/build/build.2130914667.tar
I0806 00:16:35.837713    2468 docker.go:360] Building image: /var/lib/minikube/build/build.2130914667
I0806 00:16:35.837759    2468 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-804000 /var/lib/minikube/build/build.2130914667
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.3s
#6 [2/3] RUN true
#6 DONE 0.1s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:d9c34802b4d42be607db5eab9d34a71b75f1865ffb686f70d244a2630151e9d5 done
#8 naming to localhost/my-image:functional-804000 done
#8 DONE 0.0s
I0806 00:16:37.280021    2468 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-804000 /var/lib/minikube/build/build.2130914667: (1.44241275s)
I0806 00:16:37.280089    2468 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2130914667
I0806 00:16:37.285856    2468 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2130914667.tar
I0806 00:16:37.290874    2468 build_images.go:217] Built localhost/my-image:functional-804000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2130914667.tar
I0806 00:16:37.290896    2468 build_images.go:133] succeeded building to: functional-804000
I0806 00:16:37.290898    2468 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image ls
2024/08/06 00:16:41 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.67s)
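
The numbered build stages in the log (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /) imply a three-line Dockerfile; the real contents of testdata/build are not shown here, so the reconstruction below is an assumption. A minimal Go sketch that recreates an equivalent context and builds it with plain docker (the test instead tars the context, copies it into the VM, and builds there):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Reconstructed from build stages #1..#7 above; not the real testdata/build.
	const dockerfile = `FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
`
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	// Placeholder build context; the real content.txt is not shown in the log.
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("test\n"), 0o644); err != nil {
		panic(err)
	}

	cmd := exec.Command("docker", "build", "-t", "localhost/my-image:sketch", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}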
TestFunctional/parallel/ImageCommands/Setup (1.76s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.7408955s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-804000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)
TestFunctional/parallel/DockerEnv/bash (0.27s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-804000 docker-env) && out/minikube-darwin-arm64 status -p functional-804000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-804000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.27s)
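
The two commands above exercise the docker-env round trip: minikube docker-env prints shell exports (DOCKER_HOST and friends) that point the docker CLI at the daemon inside the VM, and eval applies them so a plain docker images lists the cluster's images. A minimal Go sketch that applies the exports without a shell (assumptions: the default bash-style `export KEY="VALUE"` output and the binary/profile names from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the docker-env exports (bash syntax by default).
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-804000", "docker-env").Output()
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "export ") {
			continue // skip comments and blank lines
		}
		kv := strings.SplitN(strings.TrimPrefix(line, "export "), "=", 2)
		if len(kv) != 2 {
			continue
		}
		os.Setenv(kv[0], strings.Trim(kv[1], `"`)) // e.g. DOCKER_HOST
	}
	// Child processes inherit the modified env, so docker now talks to the VM.
	images, err := exec.Command("docker", "images").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(images))
}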
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
TestFunctional/parallel/ServiceCmd/DeployApp (12.08s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-804000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-804000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-pdwsk" [114b0f17-fdda-44e5-a6c6-12295b2583e8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-pdwsk" [114b0f17-fdda-44e5-a6c6-12295b2583e8] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.002543791s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.08s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image load --daemon docker.io/kicbase/echo-server:functional-804000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image load --daemon docker.io/kicbase/echo-server:functional-804000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-804000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image load --daemon docker.io/kicbase/echo-server:functional-804000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image save docker.io/kicbase/echo-server:functional-804000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image rm docker.io/kicbase/echo-server:functional-804000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-804000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 image save --daemon docker.io/kicbase/echo-server:functional-804000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-804000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-804000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-804000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-804000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-804000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2271: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.02s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-804000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-804000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d113b3da-ec87-4768-8956-99251522930a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d113b3da-ec87-4768-8956-99251522930a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.002702625s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)
TestFunctional/parallel/ServiceCmd/List (0.08s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.08s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 service list -o json
functional_test.go:1490: Took "78.857ms" to run "out/minikube-darwin-arm64 -p functional-804000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:30500
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)
TestFunctional/parallel/ServiceCmd/Format (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)
TestFunctional/parallel/ServiceCmd/URL (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:30500
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-804000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.39.127 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
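
The dig invocation above queries the cluster DNS service address (10.96.0.10) directly, which only works while minikube tunnel routes the service CIDR to the host. A minimal Go sketch of the same lookup using a custom resolver (assumption: the tunnel from the earlier steps is still running):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send every DNS query to the cluster DNS service, as `dig @10.96.0.10` does.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed (is the tunnel running?):", err)
		return
	}
	fmt.Println("resolved:", addrs)
}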
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-804000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
TestFunctional/parallel/ProfileCmd/profile_list (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "92.72975ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.829667ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "85.515709ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "34.747167ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
TestFunctional/parallel/MountCmd/any-port (6.23s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-804000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3089474598/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722928586966948000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3089474598/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722928586966948000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3089474598/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722928586966948000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3089474598/001/test-1722928586966948000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (54.557125ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (54.452542ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  6 07:16 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  6 07:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  6 07:16 test-1722928586966948000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh cat /mount-9p/test-1722928586966948000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-804000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b12365c6-b36a-4209-866c-ce843ababa8a] Pending
helpers_test.go:344: "busybox-mount" [b12365c6-b36a-4209-866c-ce843ababa8a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b12365c6-b36a-4209-866c-ce843ababa8a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b12365c6-b36a-4209-866c-ce843ababa8a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004147333s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-804000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-804000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3089474598/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.23s)
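
Note: the two non-zero findmnt exits above are the test's normal polling loop, not failures; the 9p mount takes a moment to appear after the mount daemon starts. A minimal manual reproduction of the same check (profile name taken from this run; the host directory is arbitrary):

    out/minikube-darwin-arm64 mount -p functional-804000 /tmp/mnt:/mount-9p &
    out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T /mount-9p | grep 9p"   # retry until exit 0
    out/minikube-darwin-arm64 -p functional-804000 ssh -- ls -la /mount-9p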

TestFunctional/parallel/MountCmd/specific-port (0.91s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-804000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1368701992/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (59.739833ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-804000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1368701992/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-804000 ssh "sudo umount -f /mount-9p": exit status 1 (58.981917ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-804000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-804000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1368701992/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.91s)
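
Note: the failed umount during cleanup is expected: as the stdout above shows, exit status 32 here means /mount-9p was already not mounted, which is the desired end state, so the harness logs it and still passes. An idempotent sketch of the same cleanup:

    out/minikube-darwin-arm64 -p functional-804000 ssh "sudo umount -f /mount-9p" || true   # status 32 == not mounted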

TestFunctional/parallel/MountCmd/VerifyCleanup (0.95s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-804000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2648906158/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-804000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2648906158/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-804000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2648906158/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T" /mount1: exit status 1 (72.770291ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-804000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-804000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-804000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2648906158/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-804000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2648906158/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-804000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2648906158/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.95s)
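
Note: mount --kill=true is the blanket cleanup verified here: it terminates every mount process for the profile at once, which is why the three per-mount stop attempts afterwards find no parent and report "assuming dead". The equivalent one-liner:

    out/minikube-darwin-arm64 mount -p functional-804000 --kill=true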

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-804000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-804000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-804000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (202.14s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-597000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0806 00:18:35.461364    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
E0806 00:19:03.166736    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/addons-585000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-597000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m21.948962125s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (202.14s)
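
Note: the --ha flag is what makes this a multi-control-plane cluster; on this minikube version it appears to provision three control-plane nodes (ha-597000, -m02, -m03), and the worker added in AddWorkerNode below becomes -m04, matching the node names used by CopyFile. The cluster can be recreated outside the harness with the same invocation:

    out/minikube-darwin-arm64 start -p ha-597000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2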

TestMultiControlPlane/serial/DeployApp (5.06s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-597000 -- rollout status deployment/busybox: (3.391572375s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-9sjvr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-fsgt2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-wjptk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-9sjvr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-fsgt2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-wjptk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-9sjvr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-fsgt2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-wjptk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.06s)
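
Note: the deployment check is a three-level DNS probe run in every busybox replica: an external name (kubernetes.io), the in-cluster short name (kubernetes.default), and the full FQDN. Condensed to one pod, kubectl --context being equivalent to the minikube kubectl wrapper used above:

    kubectl --context ha-597000 exec busybox-fc5497c4f-9sjvr -- nslookup kubernetes.io
    kubectl --context ha-597000 exec busybox-fc5497c4f-9sjvr -- nslookup kubernetes.default.svc.cluster.local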

TestMultiControlPlane/serial/PingHostFromPods (0.75s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-9sjvr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-9sjvr -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-fsgt2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-fsgt2 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-wjptk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-597000 -- exec busybox-fc5497c4f-wjptk -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.75s)
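
Note: the awk/cut pipeline extracts the resolved address from busybox nslookup output, whose fifth line (in its classic layout) reads "Address 1: <ip>"; field 3 is therefore the host gateway IP, which each pod then pings:

    nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3   # -> 192.168.105.1 on this run
    ping -c 1 192.168.105.1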

TestMultiControlPlane/serial/AddWorkerNode (53.2s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-597000 -v=7 --alsologtostderr
E0806 00:20:48.423046    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:20:48.428626    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:20:48.440215    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:20:48.460680    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:20:48.502783    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:20:48.584916    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:20:48.747010    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:20:49.069105    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:20:49.709421    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:20:50.991550    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:20:53.551899    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:20:58.674049    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-597000 -v=7 --alsologtostderr: (52.968533375s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.20s)

TestMultiControlPlane/serial/NodeLabels (0.18s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-597000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.18s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.26s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.26s)

TestMultiControlPlane/serial/CopyFile (4.35s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp testdata/cp-test.txt ha-597000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile226172059/001/cp-test_ha-597000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000:/home/docker/cp-test.txt ha-597000-m02:/home/docker/cp-test_ha-597000_ha-597000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test_ha-597000_ha-597000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000:/home/docker/cp-test.txt ha-597000-m03:/home/docker/cp-test_ha-597000_ha-597000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test_ha-597000_ha-597000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000:/home/docker/cp-test.txt ha-597000-m04:/home/docker/cp-test_ha-597000_ha-597000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test_ha-597000_ha-597000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp testdata/cp-test.txt ha-597000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile226172059/001/cp-test_ha-597000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000-m02:/home/docker/cp-test.txt ha-597000:/home/docker/cp-test_ha-597000-m02_ha-597000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test_ha-597000-m02_ha-597000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000-m02:/home/docker/cp-test.txt ha-597000-m03:/home/docker/cp-test_ha-597000-m02_ha-597000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test_ha-597000-m02_ha-597000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000-m02:/home/docker/cp-test.txt ha-597000-m04:/home/docker/cp-test_ha-597000-m02_ha-597000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test_ha-597000-m02_ha-597000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp testdata/cp-test.txt ha-597000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile226172059/001/cp-test_ha-597000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000-m03:/home/docker/cp-test.txt ha-597000:/home/docker/cp-test_ha-597000-m03_ha-597000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test_ha-597000-m03_ha-597000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000-m03:/home/docker/cp-test.txt ha-597000-m02:/home/docker/cp-test_ha-597000-m03_ha-597000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test_ha-597000-m03_ha-597000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000-m03:/home/docker/cp-test.txt ha-597000-m04:/home/docker/cp-test_ha-597000-m03_ha-597000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test_ha-597000-m03_ha-597000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp testdata/cp-test.txt ha-597000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile226172059/001/cp-test_ha-597000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000-m04:/home/docker/cp-test.txt ha-597000:/home/docker/cp-test_ha-597000-m04_ha-597000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000 "sudo cat /home/docker/cp-test_ha-597000-m04_ha-597000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000-m04:/home/docker/cp-test.txt ha-597000-m02:/home/docker/cp-test_ha-597000-m04_ha-597000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test_ha-597000-m04_ha-597000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 cp ha-597000-m04:/home/docker/cp-test.txt ha-597000-m03:/home/docker/cp-test_ha-597000-m04_ha-597000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m03 "sudo cat /home/docker/cp-test_ha-597000-m04_ha-597000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.35s)
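
Note: every hop above is the same three-step round trip, repeated for each ordered pair of the four nodes: cp a file in, then cat it on both ends to compare. One representative exchange:

    out/minikube-darwin-arm64 -p ha-597000 cp testdata/cp-test.txt ha-597000-m02:/home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p ha-597000 ssh -n ha-597000-m02 "sudo cat /home/docker/cp-test.txt"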

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0806 00:35:48.402862    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
E0806 00:37:11.466050    1455 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-965/.minikube/profiles/functional-804000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.097143417s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.10s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.79s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-537000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-537000 --output=json --user=testUser: (1.790675417s)
--- PASS: TestJSONOutput/stop/Command (1.79s)
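
Note: stop emits the same JSON event stream as start when given --output=json, and --user=testUser tags the invocation in minikube's audit log, which is what the stop/Audit subtest below checks:

    out/minikube-darwin-arm64 stop -p json-output-537000 --output=json --user=testUser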

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-945000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-945000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.924166ms)

-- stdout --
	{"specversion":"1.0","id":"10f340dc-238e-4b46-9977-af6a173708f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-945000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"29cfc64c-d0f7-4c12-b164-797e97d26ce4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19370"}}
	{"specversion":"1.0","id":"40f9c7e0-2dc9-4164-9618-493d13e1cbd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig"}}
	{"specversion":"1.0","id":"dbf2acd5-9efa-4dc6-b6b4-566d57d490e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1939c5a8-d274-42a5-a2de-e56a08e5c320","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"334d5bcd-fb52-4911-8565-b6273c3c6a07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube"}}
	{"specversion":"1.0","id":"52ff1199-3061-4d39-926b-e8fa51cd4471","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f85b1bd1-d4f5-4f2e-ad21-031702b226a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-945000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-945000
--- PASS: TestErrorJSONOutput (0.20s)
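
Note: each stdout line above is a CloudEvents envelope; the final event, type io.k8s.sigs.minikube.error, carries exitcode 56 and name DRV_UNSUPPORTED_OS, matching the process exit status. One way to pull the message out of such a stream (assumes jq on the host):

    out/minikube-darwin-arm64 start -p json-output-error-945000 --memory=2200 --output=json --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # -> The driver 'fail' is not supported on darwin/arm64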

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.98s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.98s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-825000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-825000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.746208ms)

-- stdout --
	* [NoKubernetes-825000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
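
Note: exit status 14 is MK_USAGE and is the expected outcome here, since --no-kubernetes and --kubernetes-version are mutually exclusive. When a version is pinned in the global config, the stderr above gives the way out:

    minikube config unset kubernetes-version
    out/minikube-darwin-arm64 start -p NoKubernetes-825000 --no-kubernetes --driver=qemu2   # conflicting flag dropped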

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-825000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-825000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.819917ms)

-- stdout --
	* The control-plane node NoKubernetes-825000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-825000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.48s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.685076584s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.790053s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.48s)

TestNoKubernetes/serial/Stop (2.05s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-825000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-825000: (2.050830542s)
--- PASS: TestNoKubernetes/serial/Stop (2.05s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-825000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-825000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (38.460042ms)

-- stdout --
	* The control-plane node NoKubernetes-825000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-825000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-180000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

TestStartStop/group/old-k8s-version/serial/Stop (3.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-295000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-295000 --alsologtostderr -v=3: (3.2574815s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000 -n old-k8s-version-295000: exit status 7 (50.610666ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-295000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
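
Note: status --format={{.Host}} exits 7 when the host is stopped, which the test explicitly tolerates ("may be ok"); addons enable dashboard appears to only update the profile's config here, so it can succeed with no cluster running:

    out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-295000   # prints Stopped, exit status 7
    out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-295000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4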

TestStartStop/group/no-preload/serial/Stop (3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-244000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-244000 --alsologtostderr -v=3: (2.997159833s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.00s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-244000 -n no-preload-244000: exit status 7 (53.1ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-244000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (1.82s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-601000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-601000 --alsologtostderr -v=3: (1.817373375s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.82s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-601000 -n embed-certs-601000: exit status 7 (55.220375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-601000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-689000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-689000 --alsologtostderr -v=3: (3.606202166s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.61s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-689000 -n default-k8s-diff-port-689000: exit status 7 (56.031042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-689000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-349000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.97s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-349000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-349000 --alsologtostderr -v=3: (3.967120708s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.97s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-349000 -n newest-cni-349000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-349000 -n newest-cni-349000: exit status 7 (58.240292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-349000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/278)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-187000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-187000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-187000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-187000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-187000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-187000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-187000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-187000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-187000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-187000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-187000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: /etc/hosts:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: /etc/resolv.conf:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-187000

>>> host: crictl pods:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: crictl containers:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> k8s: describe netcat deployment:
error: context "cilium-187000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-187000" does not exist

>>> k8s: netcat logs:
error: context "cilium-187000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-187000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-187000" does not exist

>>> k8s: coredns logs:
error: context "cilium-187000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-187000" does not exist

>>> k8s: api server logs:
error: context "cilium-187000" does not exist

>>> host: /etc/cni:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: ip a s:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: ip r s:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: iptables-save:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: iptables table nat:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-187000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-187000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-187000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-187000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-187000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-187000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-187000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-187000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-187000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-187000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-187000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: kubelet daemon config:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> k8s: kubelet logs:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-187000

>>> host: docker daemon status:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: docker daemon config:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: docker system info:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: cri-docker daemon status:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: cri-docker daemon config:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: cri-dockerd version:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: containerd daemon status:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: containerd daemon config:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: containerd config dump:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: crio daemon status:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: crio daemon config:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: /etc/crio:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

>>> host: crio config:
* Profile "cilium-187000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-187000"

----------------------- debugLogs end: cilium-187000 [took: 2.178500417s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-187000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-187000
--- SKIP: TestNetworkPlugins/group/cilium (2.28s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-567000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-567000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)