Test Report: QEMU_macOS 19461

ee4f5fb2e73abafca70b3598ab7977372efc25a8:2024-08-16:35814

Failed tests (97/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 22.11
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.05
46 TestCertOptions 10.26
47 TestCertExpiration 195.79
48 TestDockerFlags 10.18
49 TestForceSystemdFlag 10.18
50 TestForceSystemdEnv 10.84
95 TestFunctional/parallel/ServiceCmdConnect 35.3
167 TestMultiControlPlane/serial/StopSecondaryNode 214.13
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 102.96
169 TestMultiControlPlane/serial/RestartSecondaryNode 257.37
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 283.48
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.74
174 TestMultiControlPlane/serial/StopCluster 251.18
175 TestMultiControlPlane/serial/RestartCluster 5.25
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.12
184 TestJSONOutput/start/Command 9.79
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.05
213 TestMinikubeProfile 10.14
216 TestMountStart/serial/StartWithMountFirst 9.98
219 TestMultiNode/serial/FreshStart2Nodes 9.98
220 TestMultiNode/serial/DeployApp2Nodes 102.03
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.07
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 55.29
228 TestMultiNode/serial/RestartKeepsNodes 8.41
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 3.97
231 TestMultiNode/serial/RestartMultiNode 5.25
232 TestMultiNode/serial/ValidateNameConflict 20.26
236 TestPreload 9.92
238 TestScheduledStopUnix 10.13
239 TestSkaffold 13.3
242 TestRunningBinaryUpgrade 599.39
244 TestKubernetesUpgrade 18.46
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.34
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.45
260 TestStoppedBinaryUpgrade/Upgrade 574.09
262 TestPause/serial/Start 10.04
272 TestNoKubernetes/serial/StartWithK8s 9.85
273 TestNoKubernetes/serial/StartWithStopK8s 5.28
274 TestNoKubernetes/serial/Start 5.28
278 TestNoKubernetes/serial/StartNoArgs 5.3
280 TestNetworkPlugins/group/auto/Start 9.97
281 TestNetworkPlugins/group/flannel/Start 9.9
282 TestNetworkPlugins/group/kindnet/Start 10.01
283 TestNetworkPlugins/group/enable-default-cni/Start 10.08
284 TestNetworkPlugins/group/bridge/Start 9.88
285 TestNetworkPlugins/group/kubenet/Start 9.76
286 TestNetworkPlugins/group/custom-flannel/Start 9.76
287 TestNetworkPlugins/group/calico/Start 9.81
288 TestNetworkPlugins/group/false/Start 9.74
291 TestStartStop/group/old-k8s-version/serial/FirstStart 9.89
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 9.83
303 TestStartStop/group/no-preload/serial/DeployApp 0.09
305 TestStartStop/group/embed-certs/serial/FirstStart 9.98
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.14
309 TestStartStop/group/no-preload/serial/SecondStart 6.35
310 TestStartStop/group/embed-certs/serial/DeployApp 0.09
311 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
312 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
315 TestStartStop/group/no-preload/serial/Pause 0.1
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.95
320 TestStartStop/group/embed-certs/serial/SecondStart 6.82
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.09
326 TestStartStop/group/embed-certs/serial/Pause 0.11
329 TestStartStop/group/newest-cni/serial/FirstStart 9.91
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.29
334 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
335 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
337 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
340 TestStartStop/group/newest-cni/serial/SecondStart 5.26
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (22.11s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-511000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-511000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (22.110613791s)

-- stdout --
	{"specversion":"1.0","id":"d12540fb-c1bb-439d-820e-b362e1d72b75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-511000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6fdca891-ddb9-4bfb-9390-1e1eb7a423b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19461"}}
	{"specversion":"1.0","id":"65e95726-92e4-4298-aaf1-e44d26dc81a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig"}}
	{"specversion":"1.0","id":"f0dae923-3629-4184-bfba-0b6dc01dca2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"9776f96c-ea77-4321-a839-b8173fdc5493","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d8c52243-ace6-4a36-8210-7c95452ea6f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube"}}
	{"specversion":"1.0","id":"74350cc0-20e3-4020-836a-5f4455fdaa14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"a7d38be9-8ba1-45d6-937a-d8dc9a8c29c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"16e03fe8-3616-46b1-b83d-3714993d6817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"caf55c2f-d3f0-4abd-86eb-4e924cdf2208","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"50efc8ae-7a1e-4d1b-a675-9cfc9ffc5d0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-511000\" primary control-plane node in \"download-only-511000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b4a154e-91b5-462f-a583-09a49756a8c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a93a102c-707e-41c7-9359-a1d95b0ab541","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10764f960 0x10764f960 0x10764f960 0x10764f960 0x10764f960 0x10764f960 0x10764f960] Decompressors:map[bz2:0x140008137e0 gz:0x140008137e8 tar:0x14000813790 tar.bz2:0x140008137a0 tar.gz:0x140008137b0 tar.xz:0x140008137c0 tar.zst:0x140008137d0 tbz2:0x140008137a0 tgz:0x14
0008137b0 txz:0x140008137c0 tzst:0x140008137d0 xz:0x140008137f0 zip:0x14000813800 zst:0x140008137f8] Getters:map[file:0x140017d28a0 http:0x14000546190 https:0x140005461e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"6bfffeb4-cb0b-4ef3-b67c-84f1fd6da893","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0816 09:47:30.873357    2056 out.go:345] Setting OutFile to fd 1 ...
	I0816 09:47:30.873490    2056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:47:30.873494    2056 out.go:358] Setting ErrFile to fd 2...
	I0816 09:47:30.873497    2056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:47:30.873627    2056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	W0816 09:47:30.873712    2056 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19461-1189/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19461-1189/.minikube/config/config.json: no such file or directory
	I0816 09:47:30.874950    2056 out.go:352] Setting JSON to true
	I0816 09:47:30.892555    2056 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1013,"bootTime":1723825837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 09:47:30.892628    2056 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 09:47:30.897982    2056 out.go:97] [download-only-511000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 09:47:30.898151    2056 notify.go:220] Checking for updates...
	W0816 09:47:30.898177    2056 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball: no such file or directory
	I0816 09:47:30.900868    2056 out.go:169] MINIKUBE_LOCATION=19461
	I0816 09:47:30.903957    2056 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 09:47:30.907823    2056 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 09:47:30.910911    2056 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 09:47:30.913919    2056 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	W0816 09:47:30.919884    2056 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 09:47:30.920131    2056 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 09:47:30.924981    2056 out.go:97] Using the qemu2 driver based on user configuration
	I0816 09:47:30.925005    2056 start.go:297] selected driver: qemu2
	I0816 09:47:30.925022    2056 start.go:901] validating driver "qemu2" against <nil>
	I0816 09:47:30.925111    2056 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 09:47:30.928842    2056 out.go:169] Automatically selected the socket_vmnet network
	I0816 09:47:30.934316    2056 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0816 09:47:30.934406    2056 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 09:47:30.934510    2056 cni.go:84] Creating CNI manager for ""
	I0816 09:47:30.934532    2056 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0816 09:47:30.934592    2056 start.go:340] cluster config:
	{Name:download-only-511000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-511000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 09:47:30.939972    2056 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 09:47:30.944934    2056 out.go:97] Downloading VM boot image ...
	I0816 09:47:30.944958    2056 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0816 09:47:43.296113    2056 out.go:97] Starting "download-only-511000" primary control-plane node in "download-only-511000" cluster
	I0816 09:47:43.296144    2056 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 09:47:43.361607    2056 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 09:47:43.361631    2056 cache.go:56] Caching tarball of preloaded images
	I0816 09:47:43.361830    2056 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 09:47:43.365892    2056 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0816 09:47:43.365899    2056 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 09:47:43.455485    2056 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 09:47:51.797162    2056 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 09:47:51.797322    2056 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 09:47:52.492236    2056 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0816 09:47:52.492428    2056 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/download-only-511000/config.json ...
	I0816 09:47:52.492445    2056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/download-only-511000/config.json: {Name:mkc418fcfc00b5e6e5137590cd2b24f7a7265e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 09:47:52.492658    2056 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 09:47:52.492856    2056 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0816 09:47:52.906201    2056 out.go:193] 
	W0816 09:47:52.914376    2056 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10764f960 0x10764f960 0x10764f960 0x10764f960 0x10764f960 0x10764f960 0x10764f960] Decompressors:map[bz2:0x140008137e0 gz:0x140008137e8 tar:0x14000813790 tar.bz2:0x140008137a0 tar.gz:0x140008137b0 tar.xz:0x140008137c0 tar.zst:0x140008137d0 tbz2:0x140008137a0 tgz:0x140008137b0 txz:0x140008137c0 tzst:0x140008137d0 xz:0x140008137f0 zip:0x14000813800 zst:0x140008137f8] Getters:map[file:0x140017d28a0 http:0x14000546190 https:0x140005461e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0816 09:47:52.914398    2056 out_reason.go:110] 
	W0816 09:47:52.921274    2056 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 09:47:52.925221    2056 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-511000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (22.11s)
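
Note: exit status 40 here traces to the 404 in the log above. Kubernetes v1.20.0 predates upstream darwin/arm64 release binaries, so https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 was never published, and the TestDownloadOnly/v1.20.0/kubectl failure below follows directly from the missing cached binary. A minimal curl check against the URL taken from the log (the v1.31.0 comparison release is an assumption, chosen only because the rest of this run targets v1.31.0):

	# Final HTTP status for the v1.20.0 darwin/arm64 kubectl checksum file (expect 404)
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# Same check for a release that does ship darwin/arm64 binaries (expect 200)
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.31.0/bin/darwin/arm64/kubectl.sha256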

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.05s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-263000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-263000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.90363975s)

-- stdout --
	* [offline-docker-263000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-263000" primary control-plane node in "offline-docker-263000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-263000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:29:54.603191    4397 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:29:54.603311    4397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:29:54.603314    4397 out.go:358] Setting ErrFile to fd 2...
	I0816 10:29:54.603317    4397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:29:54.603457    4397 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:29:54.604512    4397 out.go:352] Setting JSON to false
	I0816 10:29:54.622305    4397 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3557,"bootTime":1723825837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:29:54.622380    4397 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:29:54.627343    4397 out.go:177] * [offline-docker-263000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:29:54.634144    4397 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:29:54.634148    4397 notify.go:220] Checking for updates...
	I0816 10:29:54.640182    4397 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:29:54.643164    4397 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:29:54.646197    4397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:29:54.649213    4397 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:29:54.650369    4397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:29:54.653564    4397 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:29:54.653611    4397 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:29:54.657215    4397 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:29:54.662181    4397 start.go:297] selected driver: qemu2
	I0816 10:29:54.662188    4397 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:29:54.662196    4397 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:29:54.664062    4397 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:29:54.667183    4397 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:29:54.670400    4397 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:29:54.670454    4397 cni.go:84] Creating CNI manager for ""
	I0816 10:29:54.670463    4397 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:29:54.670467    4397 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:29:54.670530    4397 start.go:340] cluster config:
	{Name:offline-docker-263000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:29:54.674464    4397 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:29:54.682222    4397 out.go:177] * Starting "offline-docker-263000" primary control-plane node in "offline-docker-263000" cluster
	I0816 10:29:54.686068    4397 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:29:54.686090    4397 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:29:54.686103    4397 cache.go:56] Caching tarball of preloaded images
	I0816 10:29:54.686177    4397 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:29:54.686183    4397 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:29:54.686245    4397 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/offline-docker-263000/config.json ...
	I0816 10:29:54.686255    4397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/offline-docker-263000/config.json: {Name:mkb053e9e058c5138c1a0b75f9170a0172426be0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:29:54.686567    4397 start.go:360] acquireMachinesLock for offline-docker-263000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:29:54.686601    4397 start.go:364] duration metric: took 25µs to acquireMachinesLock for "offline-docker-263000"
	I0816 10:29:54.686613    4397 start.go:93] Provisioning new machine with config: &{Name:offline-docker-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:29:54.686639    4397 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:29:54.691238    4397 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:29:54.706977    4397 start.go:159] libmachine.API.Create for "offline-docker-263000" (driver="qemu2")
	I0816 10:29:54.707036    4397 client.go:168] LocalClient.Create starting
	I0816 10:29:54.707155    4397 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:29:54.707193    4397 main.go:141] libmachine: Decoding PEM data...
	I0816 10:29:54.707208    4397 main.go:141] libmachine: Parsing certificate...
	I0816 10:29:54.707253    4397 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:29:54.707276    4397 main.go:141] libmachine: Decoding PEM data...
	I0816 10:29:54.707285    4397 main.go:141] libmachine: Parsing certificate...
	I0816 10:29:54.707654    4397 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:29:54.856409    4397 main.go:141] libmachine: Creating SSH key...
	I0816 10:29:54.929194    4397 main.go:141] libmachine: Creating Disk image...
	I0816 10:29:54.929211    4397 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:29:54.929406    4397 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/disk.qcow2
	I0816 10:29:54.939202    4397 main.go:141] libmachine: STDOUT: 
	I0816 10:29:54.939223    4397 main.go:141] libmachine: STDERR: 
	I0816 10:29:54.939287    4397 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/disk.qcow2 +20000M
	I0816 10:29:54.952580    4397 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:29:54.952598    4397 main.go:141] libmachine: STDERR: 
	I0816 10:29:54.952612    4397 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/disk.qcow2
	I0816 10:29:54.952617    4397 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:29:54.952629    4397 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:29:54.952659    4397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:4e:79:42:34:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/disk.qcow2
	I0816 10:29:54.954298    4397 main.go:141] libmachine: STDOUT: 
	I0816 10:29:54.954313    4397 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:29:54.954329    4397 client.go:171] duration metric: took 247.280333ms to LocalClient.Create
	I0816 10:29:56.956387    4397 start.go:128] duration metric: took 2.26978825s to createHost
	I0816 10:29:56.956413    4397 start.go:83] releasing machines lock for "offline-docker-263000", held for 2.269855417s
	W0816 10:29:56.956426    4397 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:29:56.961131    4397 out.go:177] * Deleting "offline-docker-263000" in qemu2 ...
	W0816 10:29:56.976300    4397 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:29:56.976314    4397 start.go:729] Will try again in 5 seconds ...
	I0816 10:30:01.978377    4397 start.go:360] acquireMachinesLock for offline-docker-263000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:30:01.978626    4397 start.go:364] duration metric: took 183.708µs to acquireMachinesLock for "offline-docker-263000"
	I0816 10:30:01.978846    4397 start.go:93] Provisioning new machine with config: &{Name:offline-docker-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:30:01.979036    4397 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:30:01.987466    4397 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:30:02.032077    4397 start.go:159] libmachine.API.Create for "offline-docker-263000" (driver="qemu2")
	I0816 10:30:02.032158    4397 client.go:168] LocalClient.Create starting
	I0816 10:30:02.032304    4397 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:30:02.032376    4397 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:02.032392    4397 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:02.032473    4397 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:30:02.032527    4397 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:02.032542    4397 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:02.033231    4397 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:30:02.207313    4397 main.go:141] libmachine: Creating SSH key...
	I0816 10:30:02.411847    4397 main.go:141] libmachine: Creating Disk image...
	I0816 10:30:02.411857    4397 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:30:02.412048    4397 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/disk.qcow2
	I0816 10:30:02.421586    4397 main.go:141] libmachine: STDOUT: 
	I0816 10:30:02.421605    4397 main.go:141] libmachine: STDERR: 
	I0816 10:30:02.421650    4397 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/disk.qcow2 +20000M
	I0816 10:30:02.429730    4397 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:30:02.429755    4397 main.go:141] libmachine: STDERR: 
	I0816 10:30:02.429773    4397 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/disk.qcow2
	I0816 10:30:02.429778    4397 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:30:02.429783    4397 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:30:02.429807    4397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:9d:62:89:28:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/offline-docker-263000/disk.qcow2
	I0816 10:30:02.431381    4397 main.go:141] libmachine: STDOUT: 
	I0816 10:30:02.431396    4397 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:30:02.431413    4397 client.go:171] duration metric: took 399.257792ms to LocalClient.Create
	I0816 10:30:04.433560    4397 start.go:128] duration metric: took 2.454528292s to createHost
	I0816 10:30:04.433620    4397 start.go:83] releasing machines lock for "offline-docker-263000", held for 2.455025042s
	W0816 10:30:04.433924    4397 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-263000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-263000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:30:04.445394    4397 out.go:201] 
	W0816 10:30:04.454505    4397 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:30:04.454528    4397 out.go:270] * 
	* 
	W0816 10:30:04.456291    4397 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:30:04.466340    4397 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-263000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-16 10:30:04.477706 -0700 PDT m=+2553.721290168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-263000 -n offline-docker-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-263000 -n offline-docker-263000: exit status 7 (68.465709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-263000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-263000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-263000
--- FAIL: TestOffline (10.05s)
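
Note: almost every qemu2 start failure in this run shares the stderr line 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning the socket_vmnet daemon was not listening when libmachine invoked /opt/socket_vmnet/bin/socket_vmnet_client. A short host-side triage sketch, assuming the paths from the log and a launchd-managed socket_vmnet install (the io.github.lima-vm.socket_vmnet label is the upstream default and may differ on a given agent):

	# Is the daemon running, and does its unix socket exist? (paths taken from the log)
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If the daemon is down, restart the launchd service (label is an assumption; adjust to the local install)
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet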

TestCertOptions (10.26s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-000000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-000000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.000687916s)

-- stdout --
	* [cert-options-000000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-000000" primary control-plane node in "cert-options-000000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-000000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-000000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-000000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-000000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-000000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.273459ms)

-- stdout --
	* The control-plane node cert-options-000000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-000000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-000000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-000000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-000000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-000000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (39.881375ms)

-- stdout --
	* The control-plane node cert-options-000000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-000000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-000000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-000000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-000000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-16 10:30:35.789115 -0700 PDT m=+2585.033367084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-000000 -n cert-options-000000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-000000 -n cert-options-000000: exit status 7 (30.025417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-000000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-000000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-000000
--- FAIL: TestCertOptions (10.26s)
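
Every start attempt in this test dies before provisioning with the same error: connecting to "/var/run/socket_vmnet" is refused, so the certificate checks that follow never had a running apiserver to inspect. That points at the socket_vmnet networking daemon on the CI host rather than at the flags under test. A minimal host-side check, assuming socket_vmnet was installed via Homebrew (the socket path is taken from the log above):

	# Is the unix socket present, and is anything accepting connections on it?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null && echo reachable

	# If the Homebrew service is used, restarting it should recreate the socket:
	sudo brew services restart socket_vmnet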

TestCertExpiration (195.79s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-105000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-105000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.470962083s)

-- stdout --
	* [cert-expiration-105000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-105000" primary control-plane node in "cert-expiration-105000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-105000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-105000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-105000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-105000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-105000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.1820415s)

-- stdout --
	* [cert-expiration-105000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-105000" primary control-plane node in "cert-expiration-105000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-105000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-105000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-105000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-105000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-105000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-105000" primary control-plane node in "cert-expiration-105000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-105000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-105000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-105000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-16 10:33:36.16663 -0700 PDT m=+2765.414734376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-105000 -n cert-expiration-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-105000 -n cert-expiration-105000: exit status 7 (57.914333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-105000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-105000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-105000
--- FAIL: TestCertExpiration (195.79s)
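
Both starts fail on the same socket_vmnet connection error, so the test never issued certificates with the shortened 3m lifetime, and the expired-cert warning it asserts on could not appear. `--cert-expiration` takes a Go duration string (8760h is one year). On a host where the qemu2 driver comes up, the shortened lifetime could be verified directly; a sketch reusing the cert path from TestCertOptions above:

	# Issue certs valid for 3 minutes, then read back the notAfter date
	out/minikube-darwin-arm64 start -p cert-expiration-105000 --memory=2048 --cert-expiration=3m --driver=qemu2
	out/minikube-darwin-arm64 -p cert-expiration-105000 ssh "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"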

TestDockerFlags (10.18s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-735000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-735000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.94527525s)

-- stdout --
	* [docker-flags-735000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-735000" primary control-plane node in "docker-flags-735000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-735000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:30:15.483519    4891 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:30:15.483651    4891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:30:15.483655    4891 out.go:358] Setting ErrFile to fd 2...
	I0816 10:30:15.483657    4891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:30:15.483785    4891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:30:15.484838    4891 out.go:352] Setting JSON to false
	I0816 10:30:15.501143    4891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3578,"bootTime":1723825837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:30:15.501212    4891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:30:15.506946    4891 out.go:177] * [docker-flags-735000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:30:15.513856    4891 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:30:15.513916    4891 notify.go:220] Checking for updates...
	I0816 10:30:15.520879    4891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:30:15.523875    4891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:30:15.526854    4891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:30:15.529873    4891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:30:15.531392    4891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:30:15.535256    4891 config.go:182] Loaded profile config "force-systemd-flag-588000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:30:15.535321    4891 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:30:15.535376    4891 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:30:15.539860    4891 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:30:15.545769    4891 start.go:297] selected driver: qemu2
	I0816 10:30:15.545776    4891 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:30:15.545783    4891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:30:15.548215    4891 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:30:15.551884    4891 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:30:15.555006    4891 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0816 10:30:15.555032    4891 cni.go:84] Creating CNI manager for ""
	I0816 10:30:15.555046    4891 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:30:15.555049    4891 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:30:15.555070    4891 start.go:340] cluster config:
	{Name:docker-flags-735000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-735000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:30:15.558651    4891 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:30:15.565872    4891 out.go:177] * Starting "docker-flags-735000" primary control-plane node in "docker-flags-735000" cluster
	I0816 10:30:15.569862    4891 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:30:15.569881    4891 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:30:15.569893    4891 cache.go:56] Caching tarball of preloaded images
	I0816 10:30:15.569961    4891 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:30:15.569966    4891 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:30:15.570036    4891 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/docker-flags-735000/config.json ...
	I0816 10:30:15.570047    4891 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/docker-flags-735000/config.json: {Name:mk485415c25d4d089e13b81b5b3e26285e25deac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:30:15.570259    4891 start.go:360] acquireMachinesLock for docker-flags-735000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:30:15.570292    4891 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "docker-flags-735000"
	I0816 10:30:15.570305    4891 start.go:93] Provisioning new machine with config: &{Name:docker-flags-735000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-735000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:30:15.570339    4891 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:30:15.576745    4891 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:30:15.593592    4891 start.go:159] libmachine.API.Create for "docker-flags-735000" (driver="qemu2")
	I0816 10:30:15.593619    4891 client.go:168] LocalClient.Create starting
	I0816 10:30:15.593686    4891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:30:15.593714    4891 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:15.593725    4891 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:15.593763    4891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:30:15.593786    4891 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:15.593793    4891 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:15.594142    4891 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:30:15.740917    4891 main.go:141] libmachine: Creating SSH key...
	I0816 10:30:15.866201    4891 main.go:141] libmachine: Creating Disk image...
	I0816 10:30:15.866206    4891 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:30:15.866372    4891 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/disk.qcow2
	I0816 10:30:15.875496    4891 main.go:141] libmachine: STDOUT: 
	I0816 10:30:15.875516    4891 main.go:141] libmachine: STDERR: 
	I0816 10:30:15.875560    4891 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/disk.qcow2 +20000M
	I0816 10:30:15.883507    4891 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:30:15.883526    4891 main.go:141] libmachine: STDERR: 
	I0816 10:30:15.883542    4891 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/disk.qcow2
	I0816 10:30:15.883547    4891 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:30:15.883556    4891 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:30:15.883590    4891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:a3:f3:b0:b8:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/disk.qcow2
	I0816 10:30:15.885183    4891 main.go:141] libmachine: STDOUT: 
	I0816 10:30:15.885202    4891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:30:15.885220    4891 client.go:171] duration metric: took 291.604208ms to LocalClient.Create
	I0816 10:30:17.887352    4891 start.go:128] duration metric: took 2.317042708s to createHost
	I0816 10:30:17.887389    4891 start.go:83] releasing machines lock for "docker-flags-735000", held for 2.317136667s
	W0816 10:30:17.887462    4891 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:30:17.914623    4891 out.go:177] * Deleting "docker-flags-735000" in qemu2 ...
	W0816 10:30:17.937689    4891 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:30:17.937707    4891 start.go:729] Will try again in 5 seconds ...
	I0816 10:30:22.939759    4891 start.go:360] acquireMachinesLock for docker-flags-735000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:30:22.955843    4891 start.go:364] duration metric: took 15.9585ms to acquireMachinesLock for "docker-flags-735000"
	I0816 10:30:22.955967    4891 start.go:93] Provisioning new machine with config: &{Name:docker-flags-735000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-735000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:30:22.956235    4891 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:30:22.965462    4891 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:30:23.015116    4891 start.go:159] libmachine.API.Create for "docker-flags-735000" (driver="qemu2")
	I0816 10:30:23.015160    4891 client.go:168] LocalClient.Create starting
	I0816 10:30:23.015271    4891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:30:23.015329    4891 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:23.015343    4891 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:23.015409    4891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:30:23.015458    4891 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:23.015469    4891 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:23.015992    4891 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:30:23.238111    4891 main.go:141] libmachine: Creating SSH key...
	I0816 10:30:23.328955    4891 main.go:141] libmachine: Creating Disk image...
	I0816 10:30:23.328961    4891 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:30:23.329119    4891 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/disk.qcow2
	I0816 10:30:23.338260    4891 main.go:141] libmachine: STDOUT: 
	I0816 10:30:23.338279    4891 main.go:141] libmachine: STDERR: 
	I0816 10:30:23.338331    4891 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/disk.qcow2 +20000M
	I0816 10:30:23.346143    4891 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:30:23.346157    4891 main.go:141] libmachine: STDERR: 
	I0816 10:30:23.346168    4891 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/disk.qcow2
	I0816 10:30:23.346172    4891 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:30:23.346183    4891 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:30:23.346212    4891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:e0:74:2d:26:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/docker-flags-735000/disk.qcow2
	I0816 10:30:23.347819    4891 main.go:141] libmachine: STDOUT: 
	I0816 10:30:23.347837    4891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:30:23.347847    4891 client.go:171] duration metric: took 332.688958ms to LocalClient.Create
	I0816 10:30:25.349978    4891 start.go:128] duration metric: took 2.393735667s to createHost
	I0816 10:30:25.350034    4891 start.go:83] releasing machines lock for "docker-flags-735000", held for 2.394186833s
	W0816 10:30:25.350363    4891 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-735000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-735000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:30:25.363943    4891 out.go:201] 
	W0816 10:30:25.377101    4891 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:30:25.377122    4891 out.go:270] * 
	* 
	W0816 10:30:25.379278    4891 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:30:25.387863    4891 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-735000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-735000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-735000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (77.783417ms)

-- stdout --
	* The control-plane node docker-flags-735000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-735000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-735000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-735000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-735000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-735000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-735000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-735000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-735000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.497291ms)

-- stdout --
	* The control-plane node docker-flags-735000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-735000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-735000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-735000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-735000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-735000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-16 10:30:25.525831 -0700 PDT m=+2574.769864293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-735000 -n docker-flags-735000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-735000 -n docker-flags-735000: exit status 7 (29.517958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-735000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-735000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-735000
--- FAIL: TestDockerFlags (10.18s)
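
As elsewhere in this run, provisioning failed before the guest existed, so the `--docker-env` and `--docker-opt` values were never written into the VM, and every assertion that greps for them sees only the "host is not running" hint. On a healthy cluster, the test's own probes (verbatim from docker_test.go above) show where the flags should surface:

	# --docker-env values land in the docker unit's Environment= property
	out/minikube-darwin-arm64 -p docker-flags-735000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# --docker-opt values land on the dockerd command line (ExecStart=)
	out/minikube-darwin-arm64 -p docker-flags-735000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"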

TestForceSystemdFlag (10.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-588000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-588000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.976969166s)

-- stdout --
	* [force-systemd-flag-588000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-588000" primary control-plane node in "force-systemd-flag-588000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-588000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:30:10.365152    4869 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:30:10.365352    4869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:30:10.365355    4869 out.go:358] Setting ErrFile to fd 2...
	I0816 10:30:10.365357    4869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:30:10.365481    4869 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:30:10.366542    4869 out.go:352] Setting JSON to false
	I0816 10:30:10.382470    4869 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3573,"bootTime":1723825837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:30:10.382654    4869 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:30:10.388329    4869 out.go:177] * [force-systemd-flag-588000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:30:10.395458    4869 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:30:10.395491    4869 notify.go:220] Checking for updates...
	I0816 10:30:10.405412    4869 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:30:10.409465    4869 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:30:10.412482    4869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:30:10.413947    4869 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:30:10.417406    4869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:30:10.420790    4869 config.go:182] Loaded profile config "force-systemd-env-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:30:10.420866    4869 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:30:10.420922    4869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:30:10.425247    4869 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:30:10.432456    4869 start.go:297] selected driver: qemu2
	I0816 10:30:10.432465    4869 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:30:10.432473    4869 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:30:10.434741    4869 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:30:10.438485    4869 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:30:10.441527    4869 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 10:30:10.441550    4869 cni.go:84] Creating CNI manager for ""
	I0816 10:30:10.441561    4869 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:30:10.441574    4869 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:30:10.441614    4869 start.go:340] cluster config:
	{Name:force-systemd-flag-588000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-588000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:30:10.445412    4869 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:30:10.452400    4869 out.go:177] * Starting "force-systemd-flag-588000" primary control-plane node in "force-systemd-flag-588000" cluster
	I0816 10:30:10.456397    4869 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:30:10.456411    4869 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:30:10.456421    4869 cache.go:56] Caching tarball of preloaded images
	I0816 10:30:10.456477    4869 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:30:10.456483    4869 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:30:10.456534    4869 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/force-systemd-flag-588000/config.json ...
	I0816 10:30:10.456545    4869 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/force-systemd-flag-588000/config.json: {Name:mk3b7af0319ee3ce1ee4fbc3c6a8ad1e888a35d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:30:10.456936    4869 start.go:360] acquireMachinesLock for force-systemd-flag-588000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:30:10.456972    4869 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "force-systemd-flag-588000"
	I0816 10:30:10.456986    4869 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-588000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-588000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:30:10.457014    4869 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:30:10.461378    4869 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:30:10.479315    4869 start.go:159] libmachine.API.Create for "force-systemd-flag-588000" (driver="qemu2")
	I0816 10:30:10.479344    4869 client.go:168] LocalClient.Create starting
	I0816 10:30:10.479411    4869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:30:10.479440    4869 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:10.479456    4869 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:10.479492    4869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:30:10.479516    4869 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:10.479526    4869 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:10.479933    4869 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:30:10.626380    4869 main.go:141] libmachine: Creating SSH key...
	I0816 10:30:10.724633    4869 main.go:141] libmachine: Creating Disk image...
	I0816 10:30:10.724638    4869 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:30:10.724809    4869 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/disk.qcow2
	I0816 10:30:10.733972    4869 main.go:141] libmachine: STDOUT: 
	I0816 10:30:10.733991    4869 main.go:141] libmachine: STDERR: 
	I0816 10:30:10.734042    4869 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/disk.qcow2 +20000M
	I0816 10:30:10.741890    4869 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:30:10.741904    4869 main.go:141] libmachine: STDERR: 
	I0816 10:30:10.741920    4869 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/disk.qcow2
	I0816 10:30:10.741931    4869 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:30:10.741946    4869 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:30:10.741971    4869 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:41:62:e1:90:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/disk.qcow2
	I0816 10:30:10.743611    4869 main.go:141] libmachine: STDOUT: 
	I0816 10:30:10.743630    4869 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:30:10.743648    4869 client.go:171] duration metric: took 264.304583ms to LocalClient.Create
	I0816 10:30:12.745829    4869 start.go:128] duration metric: took 2.288795125s to createHost
	I0816 10:30:12.745874    4869 start.go:83] releasing machines lock for "force-systemd-flag-588000", held for 2.288938292s
	W0816 10:30:12.745932    4869 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:30:12.765094    4869 out.go:177] * Deleting "force-systemd-flag-588000" in qemu2 ...
	W0816 10:30:12.789426    4869 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:30:12.789447    4869 start.go:729] Will try again in 5 seconds ...
	I0816 10:30:17.791568    4869 start.go:360] acquireMachinesLock for force-systemd-flag-588000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:30:17.887560    4869 start.go:364] duration metric: took 95.834375ms to acquireMachinesLock for "force-systemd-flag-588000"
	I0816 10:30:17.887681    4869 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-588000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-588000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:30:17.887949    4869 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:30:17.903538    4869 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:30:17.954239    4869 start.go:159] libmachine.API.Create for "force-systemd-flag-588000" (driver="qemu2")
	I0816 10:30:17.954281    4869 client.go:168] LocalClient.Create starting
	I0816 10:30:17.954398    4869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:30:17.954463    4869 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:17.954477    4869 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:17.954541    4869 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:30:17.954585    4869 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:17.954600    4869 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:17.955198    4869 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:30:18.128444    4869 main.go:141] libmachine: Creating SSH key...
	I0816 10:30:18.241983    4869 main.go:141] libmachine: Creating Disk image...
	I0816 10:30:18.241988    4869 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:30:18.242160    4869 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/disk.qcow2
	I0816 10:30:18.251331    4869 main.go:141] libmachine: STDOUT: 
	I0816 10:30:18.251357    4869 main.go:141] libmachine: STDERR: 
	I0816 10:30:18.251403    4869 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/disk.qcow2 +20000M
	I0816 10:30:18.259245    4869 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:30:18.259266    4869 main.go:141] libmachine: STDERR: 
	I0816 10:30:18.259277    4869 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/disk.qcow2
	I0816 10:30:18.259281    4869 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:30:18.259287    4869 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:30:18.259313    4869 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:c2:d3:7c:69:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-flag-588000/disk.qcow2
	I0816 10:30:18.261055    4869 main.go:141] libmachine: STDOUT: 
	I0816 10:30:18.261074    4869 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:30:18.261085    4869 client.go:171] duration metric: took 306.805125ms to LocalClient.Create
	I0816 10:30:20.263208    4869 start.go:128] duration metric: took 2.375283917s to createHost
	I0816 10:30:20.263259    4869 start.go:83] releasing machines lock for "force-systemd-flag-588000", held for 2.375722875s
	W0816 10:30:20.263697    4869 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-588000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-588000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:30:20.276122    4869 out.go:201] 
	W0816 10:30:20.288265    4869 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:30:20.288304    4869 out.go:270] * 
	* 
	W0816 10:30:20.291048    4869 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:30:20.304082    4869 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-588000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-588000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-588000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.378584ms)

-- stdout --
	* The control-plane node force-systemd-flag-588000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-588000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-588000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-16 10:30:20.397146 -0700 PDT m=+2569.641069459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-588000 -n force-systemd-flag-588000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-588000 -n force-systemd-flag-588000: exit status 7 (34.2515ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-588000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-588000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-588000
--- FAIL: TestForceSystemdFlag (10.18s)
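
This failure is not specific to the systemd flag: host creation dies as soon as `socket_vmnet_client` tries to reach the `/var/run/socket_vmnet` UNIX socket, before QEMU ever boots. A minimal Go sketch of the same reachability check, assuming only the socket path shown in the logs (a diagnostic illustration, not part of minikube or the test suite):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Socket path taken from the SocketVMnetPath field in the config dump above.
    	const sock = "/var/run/socket_vmnet"

    	// A refused or timed-out dial here corresponds to the repeated
    	// `Failed to connect to "/var/run/socket_vmnet": Connection refused`
    	// lines in the stderr log.
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }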

TestForceSystemdEnv (10.84s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-552000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-552000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.649111792s)

-- stdout --
	* [force-systemd-env-552000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-552000" primary control-plane node in "force-systemd-env-552000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-552000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:30:04.649232    4825 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:30:04.649346    4825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:30:04.649350    4825 out.go:358] Setting ErrFile to fd 2...
	I0816 10:30:04.649352    4825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:30:04.649485    4825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:30:04.650778    4825 out.go:352] Setting JSON to false
	I0816 10:30:04.676736    4825 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3567,"bootTime":1723825837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:30:04.676803    4825 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:30:04.681740    4825 out.go:177] * [force-systemd-env-552000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:30:04.689776    4825 notify.go:220] Checking for updates...
	I0816 10:30:04.694722    4825 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:30:04.701737    4825 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:30:04.708717    4825 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:30:04.716745    4825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:30:04.724721    4825 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:30:04.732647    4825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0816 10:30:04.737021    4825 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:30:04.737065    4825 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:30:04.739720    4825 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:30:04.746772    4825 start.go:297] selected driver: qemu2
	I0816 10:30:04.746778    4825 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:30:04.746784    4825 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:30:04.749095    4825 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:30:04.752633    4825 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:30:04.756764    4825 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 10:30:04.756806    4825 cni.go:84] Creating CNI manager for ""
	I0816 10:30:04.756818    4825 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:30:04.756831    4825 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:30:04.756873    4825 start.go:340] cluster config:
	{Name:force-systemd-env-552000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:30:04.760458    4825 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:30:04.765731    4825 out.go:177] * Starting "force-systemd-env-552000" primary control-plane node in "force-systemd-env-552000" cluster
	I0816 10:30:04.769763    4825 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:30:04.769776    4825 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:30:04.769790    4825 cache.go:56] Caching tarball of preloaded images
	I0816 10:30:04.769850    4825 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:30:04.769856    4825 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:30:04.769921    4825 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/force-systemd-env-552000/config.json ...
	I0816 10:30:04.769932    4825 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/force-systemd-env-552000/config.json: {Name:mk9bab23026f971248589190c0e5b5206ede6826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:30:04.770127    4825 start.go:360] acquireMachinesLock for force-systemd-env-552000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:30:04.770162    4825 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "force-systemd-env-552000"
	I0816 10:30:04.770174    4825 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:30:04.770200    4825 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:30:04.778707    4825 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:30:04.795994    4825 start.go:159] libmachine.API.Create for "force-systemd-env-552000" (driver="qemu2")
	I0816 10:30:04.796093    4825 client.go:168] LocalClient.Create starting
	I0816 10:30:04.796158    4825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:30:04.796191    4825 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:04.796199    4825 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:04.796237    4825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:30:04.796265    4825 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:04.796277    4825 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:04.796617    4825 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:30:04.946271    4825 main.go:141] libmachine: Creating SSH key...
	I0816 10:30:05.043811    4825 main.go:141] libmachine: Creating Disk image...
	I0816 10:30:05.043820    4825 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:30:05.043982    4825 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/disk.qcow2
	I0816 10:30:05.053763    4825 main.go:141] libmachine: STDOUT: 
	I0816 10:30:05.053787    4825 main.go:141] libmachine: STDERR: 
	I0816 10:30:05.053841    4825 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/disk.qcow2 +20000M
	I0816 10:30:05.062223    4825 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:30:05.062238    4825 main.go:141] libmachine: STDERR: 
	I0816 10:30:05.062252    4825 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/disk.qcow2
	I0816 10:30:05.062255    4825 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:30:05.062269    4825 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:30:05.062297    4825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:e6:a2:e0:74:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/disk.qcow2
	I0816 10:30:05.063952    4825 main.go:141] libmachine: STDOUT: 
	I0816 10:30:05.063969    4825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:30:05.063997    4825 client.go:171] duration metric: took 267.895917ms to LocalClient.Create
	I0816 10:30:07.066054    4825 start.go:128] duration metric: took 2.295896083s to createHost
	I0816 10:30:07.066076    4825 start.go:83] releasing machines lock for "force-systemd-env-552000", held for 2.295958916s
	W0816 10:30:07.066099    4825 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:30:07.071782    4825 out.go:177] * Deleting "force-systemd-env-552000" in qemu2 ...
	W0816 10:30:07.081058    4825 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:30:07.081072    4825 start.go:729] Will try again in 5 seconds ...
	I0816 10:30:12.083167    4825 start.go:360] acquireMachinesLock for force-systemd-env-552000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:30:12.746072    4825 start.go:364] duration metric: took 662.754041ms to acquireMachinesLock for "force-systemd-env-552000"
	I0816 10:30:12.746177    4825 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-552000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-552000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:30:12.746482    4825 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:30:12.759059    4825 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0816 10:30:12.808680    4825 start.go:159] libmachine.API.Create for "force-systemd-env-552000" (driver="qemu2")
	I0816 10:30:12.808867    4825 client.go:168] LocalClient.Create starting
	I0816 10:30:12.808988    4825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:30:12.809065    4825 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:12.809110    4825 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:12.809177    4825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:30:12.809222    4825 main.go:141] libmachine: Decoding PEM data...
	I0816 10:30:12.809234    4825 main.go:141] libmachine: Parsing certificate...
	I0816 10:30:12.809888    4825 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:30:13.089238    4825 main.go:141] libmachine: Creating SSH key...
	I0816 10:30:13.206680    4825 main.go:141] libmachine: Creating Disk image...
	I0816 10:30:13.206686    4825 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:30:13.206852    4825 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/disk.qcow2
	I0816 10:30:13.216015    4825 main.go:141] libmachine: STDOUT: 
	I0816 10:30:13.216033    4825 main.go:141] libmachine: STDERR: 
	I0816 10:30:13.216098    4825 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/disk.qcow2 +20000M
	I0816 10:30:13.223905    4825 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:30:13.223928    4825 main.go:141] libmachine: STDERR: 
	I0816 10:30:13.223941    4825 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/disk.qcow2
	I0816 10:30:13.223945    4825 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:30:13.223951    4825 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:30:13.223986    4825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:ba:ac:1a:e5:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/force-systemd-env-552000/disk.qcow2
	I0816 10:30:13.225652    4825 main.go:141] libmachine: STDOUT: 
	I0816 10:30:13.225667    4825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:30:13.225679    4825 client.go:171] duration metric: took 416.815667ms to LocalClient.Create
	I0816 10:30:15.227910    4825 start.go:128] duration metric: took 2.481425667s to createHost
	I0816 10:30:15.227973    4825 start.go:83] releasing machines lock for "force-systemd-env-552000", held for 2.481907709s
	W0816 10:30:15.228272    4825 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-552000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-552000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:30:15.238848    4825 out.go:201] 
	W0816 10:30:15.244831    4825 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:30:15.244905    4825 out.go:270] * 
	* 
	W0816 10:30:15.247757    4825 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:30:15.255764    4825 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-552000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-552000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-552000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.080083ms)

-- stdout --
	* The control-plane node force-systemd-env-552000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-552000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-552000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-16 10:30:15.348946 -0700 PDT m=+2564.592761751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-552000 -n force-systemd-env-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-552000 -n force-systemd-env-552000: exit status 7 (33.405625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-552000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-552000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-552000
--- FAIL: TestForceSystemdEnv (10.84s)
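
The stderr log above shows minikube's two-attempt create loop: `createHost` fails, the machines lock is released, the half-created profile is deleted, the driver waits five seconds ("Will try again in 5 seconds ...") and retries once before exiting with GUEST_PROVISION. A rough Go sketch of that control flow, with a stand-in `createHost` that always fails the way these runs do (the names here are illustrative, not minikube's actual internals):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // createHost stands in for the qemu2 driver's host-creation step; in the
    // runs above it always fails with a refused socket_vmnet connection.
    func createHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	const attempts = 2 // one initial try plus one retry, as in the log
    	for i := 1; i <= attempts; i++ {
    		err := createHost()
    		if err == nil {
    			fmt.Println("host created")
    			return
    		}
    		fmt.Printf("! StartHost failed (attempt %d): %v\n", i, err)
    		if i < attempts {
    			fmt.Println("Will try again in 5 seconds ...")
    			time.Sleep(5 * time.Second)
    		}
    	}
    	fmt.Println("X Exiting due to GUEST_PROVISION")
    }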

TestFunctional/parallel/ServiceCmdConnect (35.3s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-435000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-435000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-mx7pd" [bcd1886c-0b71-4e40-94c6-1ec12f179b02] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-mx7pd" [bcd1886c-0b71-4e40-94c6-1ec12f179b02] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.005096083s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30485
functional_test.go:1661: error fetching http://192.168.105.4:30485: Get "http://192.168.105.4:30485": dial tcp 192.168.105.4:30485: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30485: Get "http://192.168.105.4:30485": dial tcp 192.168.105.4:30485: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30485: Get "http://192.168.105.4:30485": dial tcp 192.168.105.4:30485: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30485: Get "http://192.168.105.4:30485": dial tcp 192.168.105.4:30485: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30485: Get "http://192.168.105.4:30485": dial tcp 192.168.105.4:30485: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30485: Get "http://192.168.105.4:30485": dial tcp 192.168.105.4:30485: connect: connection refused
2024/08/16 09:59:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1661: error fetching http://192.168.105.4:30485: Get "http://192.168.105.4:30485": dial tcp 192.168.105.4:30485: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30485: Get "http://192.168.105.4:30485": dial tcp 192.168.105.4:30485: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-435000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-mx7pd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-435000/192.168.105.4
Start Time:       Fri, 16 Aug 2024 09:58:51 -0700
Labels:           app=hello-node-connect
pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
echoserver-arm:
Container ID:   docker://133a6928899269efc49e080f6adce9021a7b8a6b3f72b1ae8bbf8414e5beffe5
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Fri, 16 Aug 2024 09:59:09 -0700
Finished:     Fri, 16 Aug 2024 09:59:09 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hvs6h (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-hvs6h:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  34s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-mx7pd to functional-435000
Normal   Pulling    34s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     30s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.356s (3.356s including waiting). Image size: 84957542 bytes.
Normal   Created    16s (x3 over 30s)  kubelet            Created container echoserver-arm
Normal   Started    16s (x3 over 30s)  kubelet            Started container echoserver-arm
Normal   Pulled     16s (x2 over 30s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    4s (x4 over 28s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-mx7pd_default(bcd1886c-0b71-4e40-94c6-1ec12f179b02)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-435000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
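
That single log line is the root cause of the CrashLoopBackOff in the pod events above: the container's entrypoint binary was built for a different CPU architecture than the arm64 node, so the kernel refuses to exec it. One way to confirm such a mismatch is to compare the image's recorded architecture against the node's; a short Go sketch using the standard `docker image inspect` format field (assumes a local docker CLI and is purely a diagnostic illustration, not part of the test):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Image name taken from the deployment created earlier in this test.
    	image := "registry.k8s.io/echoserver-arm:1.8"
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Architecture}}", image).Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	// "exec format error" means this value does not match the node's arm64.
    	fmt.Printf("%s is built for %q\n", image, strings.TrimSpace(string(out)))
    }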
functional_test.go:1614: (dbg) Run:  kubectl --context functional-435000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.62.241
IPs:                      10.98.62.241
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30485/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
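Note the empty Endpoints field above: because the only backing pod never becomes Ready, the Service has no endpoints, which is why every fetch of the NodePort URL earlier was refused. The test's repeated fetch attempts amount to a poll like the following Go sketch (the URL is the one printed by `service hello-node-connect --url` above; attempt count and timings are illustrative):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// NodePort URL reported by the service command earlier in the log.
    	url := "http://192.168.105.4:30485"
    	client := &http.Client{Timeout: 3 * time.Second}

    	// With no ready endpoints behind the Service, every attempt fails
    	// with "connection refused", matching the test output.
    	for i := 1; i <= 5; i++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Printf("attempt %d: %v\n", i, err)
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("endpoint reachable:", resp.Status)
    		return
    	}
    }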
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-435000 -n functional-435000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                                       Args                                                        |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-435000                                                                                              | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:58 PDT |                     |
	|                | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup880919694/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                            |                   |         |         |                     |                     |
	| ssh            | functional-435000 ssh findmnt                                                                                     | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:58 PDT | 16 Aug 24 09:58 PDT |
	|                | -T /mount1                                                                                                        |                   |         |         |                     |                     |
	| ssh            | functional-435000 ssh findmnt                                                                                     | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:58 PDT | 16 Aug 24 09:58 PDT |
	|                | -T /mount2                                                                                                        |                   |         |         |                     |                     |
	| ssh            | functional-435000 ssh findmnt                                                                                     | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:58 PDT | 16 Aug 24 09:58 PDT |
	|                | -T /mount3                                                                                                        |                   |         |         |                     |                     |
	| mount          | -p functional-435000                                                                                              | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:58 PDT |                     |
	|                | --kill=true                                                                                                       |                   |         |         |                     |                     |
	| service        | functional-435000 service                                                                                         | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | hello-node-connect --url                                                                                          |                   |         |         |                     |                     |
	| service        | functional-435000 service list                                                                                    | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	| service        | functional-435000 service list                                                                                    | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | -o json                                                                                                           |                   |         |         |                     |                     |
	| service        | functional-435000 service                                                                                         | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | --namespace=default --https                                                                                       |                   |         |         |                     |                     |
	|                | --url hello-node                                                                                                  |                   |         |         |                     |                     |
	| service        | functional-435000                                                                                                 | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | service hello-node --url                                                                                          |                   |         |         |                     |                     |
	|                | --format={{.IP}}                                                                                                  |                   |         |         |                     |                     |
	| service        | functional-435000 service                                                                                         | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | hello-node --url                                                                                                  |                   |         |         |                     |                     |
	| start          | -p functional-435000                                                                                              | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT |                     |
	|                | --dry-run --memory                                                                                                |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                           |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                    |                   |         |         |                     |                     |
	| start          | -p functional-435000                                                                                              | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT |                     |
	|                | --dry-run --memory                                                                                                |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                           |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                    |                   |         |         |                     |                     |
	| start          | -p functional-435000 --dry-run                                                                                    | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT |                     |
	|                | --alsologtostderr -v=1                                                                                            |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                    |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                                                                | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | -p functional-435000                                                                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                            |                   |         |         |                     |                     |
	| image          | functional-435000                                                                                                 | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | image ls --format short                                                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                 |                   |         |         |                     |                     |
	| image          | functional-435000                                                                                                 | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | image ls --format yaml                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-435000 ssh pgrep                                                                                       | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT |                     |
	|                | buildkitd                                                                                                         |                   |         |         |                     |                     |
	| image          | functional-435000 image build -t                                                                                  | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | localhost/my-image:functional-435000                                                                              |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                                                                  |                   |         |         |                     |                     |
	| image          | functional-435000 image ls                                                                                        | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	| image          | functional-435000                                                                                                 | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | image ls --format json                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                 |                   |         |         |                     |                     |
	| image          | functional-435000                                                                                                 | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | image ls --format table                                                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                 |                   |         |         |                     |                     |
	| update-context | functional-435000                                                                                                 | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | update-context                                                                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                            |                   |         |         |                     |                     |
	| update-context | functional-435000                                                                                                 | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | update-context                                                                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                            |                   |         |         |                     |                     |
	| update-context | functional-435000                                                                                                 | functional-435000 | jenkins | v1.33.1 | 16 Aug 24 09:59 PDT | 16 Aug 24 09:59 PDT |
	|                | update-context                                                                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                            |                   |         |         |                     |                     |
	|----------------|-------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 09:59:02
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 09:59:02.851099    2915 out.go:345] Setting OutFile to fd 1 ...
	I0816 09:59:02.851239    2915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:59:02.851243    2915 out.go:358] Setting ErrFile to fd 2...
	I0816 09:59:02.851245    2915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:59:02.851399    2915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 09:59:02.852437    2915 out.go:352] Setting JSON to false
	I0816 09:59:02.868663    2915 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1705,"bootTime":1723825837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 09:59:02.868730    2915 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 09:59:02.872715    2915 out.go:177] * [functional-435000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 09:59:02.880783    2915 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 09:59:02.880842    2915 notify.go:220] Checking for updates...
	I0816 09:59:02.887744    2915 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 09:59:02.891818    2915 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 09:59:02.894768    2915 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 09:59:02.897797    2915 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 09:59:02.900821    2915 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 09:59:02.903982    2915 config.go:182] Loaded profile config "functional-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 09:59:02.904236    2915 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 09:59:02.907785    2915 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 09:59:02.914798    2915 start.go:297] selected driver: qemu2
	I0816 09:59:02.914805    2915 start.go:901] validating driver "qemu2" against &{Name:functional-435000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-435000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 09:59:02.914880    2915 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 09:59:02.917117    2915 cni.go:84] Creating CNI manager for ""
	I0816 09:59:02.917132    2915 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 09:59:02.917181    2915 start.go:340] cluster config:
	{Name:functional-435000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-435000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 09:59:02.927601    2915 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Aug 16 16:59:05 functional-435000 dockerd[6082]: time="2024-08-16T16:59:05.807455594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 16:59:05 functional-435000 dockerd[6082]: time="2024-08-16T16:59:05.807539552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 16:59:05 functional-435000 dockerd[6075]: time="2024-08-16T16:59:05.977074685Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Aug 16 16:59:09 functional-435000 dockerd[6082]: time="2024-08-16T16:59:09.370630125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 16:59:09 functional-435000 dockerd[6082]: time="2024-08-16T16:59:09.370898207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 16:59:09 functional-435000 dockerd[6082]: time="2024-08-16T16:59:09.370925040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 16:59:09 functional-435000 dockerd[6082]: time="2024-08-16T16:59:09.371006539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 16:59:09 functional-435000 dockerd[6075]: time="2024-08-16T16:59:09.410678741Z" level=info msg="ignoring event" container=133a6928899269efc49e080f6adce9021a7b8a6b3f72b1ae8bbf8414e5beffe5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 16:59:09 functional-435000 dockerd[6082]: time="2024-08-16T16:59:09.410752115Z" level=info msg="shim disconnected" id=133a6928899269efc49e080f6adce9021a7b8a6b3f72b1ae8bbf8414e5beffe5 namespace=moby
	Aug 16 16:59:09 functional-435000 dockerd[6082]: time="2024-08-16T16:59:09.410778157Z" level=warning msg="cleaning up after shim disconnected" id=133a6928899269efc49e080f6adce9021a7b8a6b3f72b1ae8bbf8414e5beffe5 namespace=moby
	Aug 16 16:59:09 functional-435000 dockerd[6082]: time="2024-08-16T16:59:09.410782407Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 16:59:10 functional-435000 cri-dockerd[6336]: time="2024-08-16T16:59:10Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Aug 16 16:59:10 functional-435000 dockerd[6082]: time="2024-08-16T16:59:10.603551232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 16:59:10 functional-435000 dockerd[6082]: time="2024-08-16T16:59:10.603596357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 16:59:10 functional-435000 dockerd[6082]: time="2024-08-16T16:59:10.603758981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 16:59:10 functional-435000 dockerd[6082]: time="2024-08-16T16:59:10.603848730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 16:59:11 functional-435000 dockerd[6082]: time="2024-08-16T16:59:11.255429506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 16 16:59:11 functional-435000 dockerd[6082]: time="2024-08-16T16:59:11.255461839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 16 16:59:11 functional-435000 dockerd[6082]: time="2024-08-16T16:59:11.255470881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 16:59:11 functional-435000 dockerd[6082]: time="2024-08-16T16:59:11.255502964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 16:59:11 functional-435000 dockerd[6082]: time="2024-08-16T16:59:11.278902538Z" level=info msg="shim disconnected" id=05d964c9afd04fc699018e85c38f64b4bff5344537f15bbf4a9431cf7bb55c82 namespace=moby
	Aug 16 16:59:11 functional-435000 dockerd[6082]: time="2024-08-16T16:59:11.278933038Z" level=warning msg="cleaning up after shim disconnected" id=05d964c9afd04fc699018e85c38f64b4bff5344537f15bbf4a9431cf7bb55c82 namespace=moby
	Aug 16 16:59:11 functional-435000 dockerd[6082]: time="2024-08-16T16:59:11.278937413Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 16 16:59:11 functional-435000 dockerd[6075]: time="2024-08-16T16:59:11.279076370Z" level=info msg="ignoring event" container=05d964c9afd04fc699018e85c38f64b4bff5344537f15bbf4a9431cf7bb55c82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 16 16:59:15 functional-435000 dockerd[6075]: 2024/08/16 16:59:15 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	05d964c9afd04       72565bf5bbedf                                                                                          14 seconds ago       Exited              echoserver-arm              2                   6b7e82d6327d1       hello-node-64b4f8f9ff-chfd8
	de1afa4596b3e       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         15 seconds ago       Running             kubernetes-dashboard        0                   a98147e53277a       kubernetes-dashboard-695b96c756-jhjsb
	133a692889926       72565bf5bbedf                                                                                          16 seconds ago       Exited              echoserver-arm              2                   774cb6cfc2dfe       hello-node-connect-65d86f57f4-mx7pd
	22c4146adc900       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   20 seconds ago       Running             dashboard-metrics-scraper   0                   1ab892577c579       dashboard-metrics-scraper-c5db448b4-h4bsl
	6a751f818b1db       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    36 seconds ago       Exited              mount-munger                0                   bf5b5934930a3       busybox-mount
	fc022f6ed04b8       nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add                          40 seconds ago       Running             myfrontend                  0                   72ca9306c78db       sp-pod
	91a6d0452b941       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                          45 seconds ago       Running             nginx                       0                   280c61694e258       nginx-svc
	d6c53903fb937       2437cf7621777                                                                                          About a minute ago   Running             coredns                     2                   8621b13138027       coredns-6f6b679f8f-qcg27
	2fd306cd7284c       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         3                   3b3c085e10b89       storage-provisioner
	59f4deaac8f5b       71d55d66fd4ee                                                                                          About a minute ago   Running             kube-proxy                  2                   570449dc8c22f       kube-proxy-wd848
	d622810ef05fa       27e3830e14027                                                                                          About a minute ago   Running             etcd                        2                   10d9254b846cd       etcd-functional-435000
	0b30b30bb31b5       fcb0683e6bdbd                                                                                          About a minute ago   Running             kube-controller-manager     2                   7a72e5a4a1b2c       kube-controller-manager-functional-435000
	9fe37defa6867       fbbbd428abb4d                                                                                          About a minute ago   Running             kube-scheduler              2                   cea4bfc2b0c9c       kube-scheduler-functional-435000
	7eb378ae88884       cd0f0ae0ec9e0                                                                                          About a minute ago   Running             kube-apiserver              0                   d1838234f64b8       kube-apiserver-functional-435000
	d39e4aa9dffae       ba04bb24b9575                                                                                          About a minute ago   Exited              storage-provisioner         2                   ed7ab88a44214       storage-provisioner
	6a30dd1d1b4ea       2437cf7621777                                                                                          2 minutes ago        Exited              coredns                     1                   32362a1184897       coredns-6f6b679f8f-qcg27
	44871d4129aa9       71d55d66fd4ee                                                                                          2 minutes ago        Exited              kube-proxy                  1                   b1d2a84620d31       kube-proxy-wd848
	e6382a55b2c00       fcb0683e6bdbd                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   61de7d4b3fbb3       kube-controller-manager-functional-435000
	15bddac460207       27e3830e14027                                                                                          2 minutes ago        Exited              etcd                        1                   155fe8a3c09ef       etcd-functional-435000
	55f1379f0e825       fbbbd428abb4d                                                                                          2 minutes ago        Exited              kube-scheduler              1                   fda78474d34d9       kube-scheduler-functional-435000
	
	
	==> coredns [6a30dd1d1b4e] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57709 - 56979 "HINFO IN 2080092990044321976.4246004887781304668. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009675928s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d6c53903fb93] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43922 - 15577 "HINFO IN 7605804219772577317.8045785248311112217. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010165141s
	[INFO] 10.244.0.1:40940 - 58369 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000101874s
	[INFO] 10.244.0.1:9752 - 36378 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000096458s
	[INFO] 10.244.0.1:9882 - 36734 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000031625s
	[INFO] 10.244.0.1:47553 - 21797 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001016328s
	[INFO] 10.244.0.1:1745 - 47397 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000062541s
	[INFO] 10.244.0.1:55909 - 56883 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000119875s
	
	
	==> describe nodes <==
	Name:               functional-435000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-435000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=functional-435000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T09_56_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 16:56:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-435000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 16:59:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 16:59:02 +0000   Fri, 16 Aug 2024 16:56:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 16:59:02 +0000   Fri, 16 Aug 2024 16:56:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 16:59:02 +0000   Fri, 16 Aug 2024 16:56:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 16:59:02 +0000   Fri, 16 Aug 2024 16:56:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-435000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 056f8d91f24f4a0297624b02e0a0ad68
	  System UUID:                056f8d91f24f4a0297624b02e0a0ad68
	  Boot ID:                    6ec7e421-e166-4396-b745-0a3c1e516120
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-chfd8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     hello-node-connect-65d86f57f4-mx7pd          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 coredns-6f6b679f8f-qcg27                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m8s
	  kube-system                 etcd-functional-435000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m13s
	  kube-system                 kube-apiserver-functional-435000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-functional-435000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kube-proxy-wd848                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kube-scheduler-functional-435000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-h4bsl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-jhjsb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m6s                   kube-proxy       
	  Normal  Starting                 84s                    kube-proxy       
	  Normal  Starting                 2m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m13s                  kubelet          Node functional-435000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  3m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    3m13s                  kubelet          Node functional-435000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s                  kubelet          Node functional-435000 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m13s                  kubelet          Starting kubelet.
	  Normal  NodeReady                3m10s                  kubelet          Node functional-435000 status is now: NodeReady
	  Normal  RegisteredNode           3m9s                   node-controller  Node functional-435000 event: Registered Node functional-435000 in Controller
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node functional-435000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node functional-435000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m10s)  kubelet          Node functional-435000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m5s                   node-controller  Node functional-435000 event: Registered Node functional-435000 in Controller
	  Normal  Starting                 87s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  87s (x8 over 87s)      kubelet          Node functional-435000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x8 over 87s)      kubelet          Node functional-435000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x7 over 87s)      kubelet          Node functional-435000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  87s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           81s                    node-controller  Node functional-435000 event: Registered Node functional-435000 in Controller
	
	
	==> dmesg <==
	[ +11.409754] systemd-fstab-generator[5171]: Ignoring "noauto" option for root device
	[ +10.727039] systemd-fstab-generator[5597]: Ignoring "noauto" option for root device
	[  +0.053770] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.103750] systemd-fstab-generator[5631]: Ignoring "noauto" option for root device
	[  +0.095032] systemd-fstab-generator[5657]: Ignoring "noauto" option for root device
	[  +0.104737] systemd-fstab-generator[5671]: Ignoring "noauto" option for root device
	[  +5.108572] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.393435] systemd-fstab-generator[6289]: Ignoring "noauto" option for root device
	[  +0.087508] systemd-fstab-generator[6301]: Ignoring "noauto" option for root device
	[  +0.097395] systemd-fstab-generator[6313]: Ignoring "noauto" option for root device
	[  +0.106796] systemd-fstab-generator[6328]: Ignoring "noauto" option for root device
	[  +0.229405] systemd-fstab-generator[6499]: Ignoring "noauto" option for root device
	[  +1.327845] systemd-fstab-generator[6623]: Ignoring "noauto" option for root device
	[Aug16 16:58] kauditd_printk_skb: 199 callbacks suppressed
	[ +16.114371] systemd-fstab-generator[7646]: Ignoring "noauto" option for root device
	[  +0.058193] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.933091] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.295988] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.005242] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.463906] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.185141] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.034967] kauditd_printk_skb: 40 callbacks suppressed
	[Aug16 16:59] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.887000] kauditd_printk_skb: 27 callbacks suppressed
	[  +7.429794] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [15bddac46020] <==
	{"level":"info","ts":"2024-08-16T16:57:17.022825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-16T16:57:17.022891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-16T16:57:17.022924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-16T16:57:17.022942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-16T16:57:17.023011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-16T16:57:17.023057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-16T16:57:17.028086Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-435000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T16:57:17.028472Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T16:57:17.029241Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T16:57:17.031363Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T16:57:17.031659Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T16:57:17.031730Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T16:57:17.032695Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T16:57:17.034401Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-16T16:57:17.035475Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T16:57:43.804907Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-16T16:57:43.804926Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-435000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-16T16:57:43.804957Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T16:57:43.804994Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T16:57:43.829336Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-16T16:57:43.829376Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-16T16:57:43.829398Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-16T16:57:43.836228Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-16T16:57:43.836312Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-16T16:57:43.836321Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-435000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [d622810ef05f] <==
	{"level":"info","ts":"2024-08-16T16:57:58.925169Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-16T16:57:58.925208Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-16T16:57:58.925228Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-16T16:57:58.925333Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-16T16:57:58.925352Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-16T16:57:58.926244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-08-16T16:57:58.926306Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-16T16:57:58.926370Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T16:57:58.926398Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T16:58:00.420838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-16T16:58:00.421023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-16T16:58:00.421092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-16T16:58:00.421469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-16T16:58:00.421490Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-16T16:58:00.421518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-16T16:58:00.421579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-16T16:58:00.426173Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T16:58:00.426505Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T16:58:00.426762Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T16:58:00.426809Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T16:58:00.426060Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-435000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T16:58:00.428380Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T16:58:00.428523Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T16:58:00.431102Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-16T16:58:00.431449Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 16:59:25 up 3 min,  0 users,  load average: 0.74, 0.41, 0.17
	Linux functional-435000 5.10.207 #1 SMP PREEMPT Thu Aug 15 18:35:44 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7eb378ae8888] <==
	I0816 16:58:01.028423       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0816 16:58:01.036565       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0816 16:58:01.036604       1 policy_source.go:224] refreshing policies
	I0816 16:58:01.038728       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0816 16:58:01.038757       1 aggregator.go:171] initial CRD sync complete...
	I0816 16:58:01.038765       1 autoregister_controller.go:144] Starting autoregister controller
	I0816 16:58:01.038767       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0816 16:58:01.038769       1 cache.go:39] Caches are synced for autoregister controller
	I0816 16:58:01.078718       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 16:58:01.925258       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 16:58:02.266469       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0816 16:58:02.270880       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0816 16:58:02.285439       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0816 16:58:02.301335       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 16:58:02.306060       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 16:58:04.476501       1 controller.go:615] quota admission added evaluator for: endpoints
	I0816 16:58:04.624286       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 16:58:21.389481       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.230.27"}
	I0816 16:58:31.889231       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.161.138"}
	I0816 16:58:51.135497       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0816 16:58:51.177140       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.62.241"}
	I0816 16:58:53.201534       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.80.224"}
	I0816 16:59:03.393542       1 controller.go:615] quota admission added evaluator for: namespaces
	I0816 16:59:03.501837       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.195.14"}
	I0816 16:59:03.514866       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.33.135"}
	
	
	==> kube-controller-manager [0b30b30bb31b] <==
	I0816 16:59:03.423211       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="12.008093ms"
	E0816 16:59:03.423443       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0816 16:59:03.427940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.793809ms"
	E0816 16:59:03.428211       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0816 16:59:03.428245       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="2.853607ms"
	E0816 16:59:03.428266       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0816 16:59:03.432407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.311444ms"
	E0816 16:59:03.432427       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0816 16:59:03.433993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="1.486033ms"
	E0816 16:59:03.434009       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0816 16:59:03.445727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.006397ms"
	I0816 16:59:03.487803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="41.953536ms"
	I0816 16:59:03.488731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="33.272339ms"
	I0816 16:59:03.495273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.52521ms"
	I0816 16:59:03.495304       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="11.791µs"
	I0816 16:59:03.499412       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.581971ms"
	I0816 16:59:03.499550       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="34.083µs"
	I0816 16:59:06.186454       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.447216ms"
	I0816 16:59:06.186484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="14.083µs"
	I0816 16:59:10.207466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="26.709µs"
	I0816 16:59:11.240925       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.90488ms"
	I0816 16:59:11.241388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="14.708µs"
	I0816 16:59:12.276512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="66.749µs"
	I0816 16:59:21.215280       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="72.625µs"
	I0816 16:59:23.216894       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="110.166µs"
	
	
	==> kube-controller-manager [e6382a55b2c0] <==
	I0816 16:57:20.900674       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0816 16:57:20.901809       1 shared_informer.go:320] Caches are synced for persistent volume
	I0816 16:57:20.901829       1 shared_informer.go:320] Caches are synced for TTL
	I0816 16:57:20.902903       1 shared_informer.go:320] Caches are synced for crt configmap
	I0816 16:57:20.924985       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0816 16:57:20.925016       1 shared_informer.go:320] Caches are synced for daemon sets
	I0816 16:57:20.925028       1 shared_informer.go:320] Caches are synced for deployment
	I0816 16:57:20.925240       1 shared_informer.go:320] Caches are synced for cronjob
	I0816 16:57:20.925002       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0816 16:57:20.925275       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0816 16:57:20.925337       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0816 16:57:20.926333       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0816 16:57:20.926370       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0816 16:57:20.926722       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0816 16:57:20.929554       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="54.935463ms"
	I0816 16:57:20.929635       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="17.125µs"
	I0816 16:57:21.094760       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 16:57:21.095877       1 shared_informer.go:320] Caches are synced for disruption
	I0816 16:57:21.132053       1 shared_informer.go:320] Caches are synced for resource quota
	I0816 16:57:21.174836       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0816 16:57:21.538242       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 16:57:21.625356       1 shared_informer.go:320] Caches are synced for garbage collector
	I0816 16:57:21.625525       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0816 16:57:29.794971       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="10.118595ms"
	I0816 16:57:29.795352       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="40.458µs"
	
	
	==> kube-proxy [44871d4129aa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 16:57:18.745395       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 16:57:18.756948       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0816 16:57:18.757024       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 16:57:18.788800       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 16:57:18.788825       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 16:57:18.788841       1 server_linux.go:169] "Using iptables Proxier"
	I0816 16:57:18.789735       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 16:57:18.790062       1 server.go:483] "Version info" version="v1.31.0"
	I0816 16:57:18.790118       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 16:57:18.790736       1 config.go:197] "Starting service config controller"
	I0816 16:57:18.790740       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 16:57:18.790749       1 config.go:104] "Starting endpoint slice config controller"
	I0816 16:57:18.790751       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 16:57:18.791004       1 config.go:326] "Starting node config controller"
	I0816 16:57:18.791561       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 16:57:18.795542       1 shared_informer.go:320] Caches are synced for node config
	I0816 16:57:18.891703       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 16:57:18.891710       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [59f4deaac8f5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0816 16:58:01.728215       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0816 16:58:01.733870       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0816 16:58:01.733898       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 16:58:01.741744       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0816 16:58:01.741810       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0816 16:58:01.741851       1 server_linux.go:169] "Using iptables Proxier"
	I0816 16:58:01.742525       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 16:58:01.742638       1 server.go:483] "Version info" version="v1.31.0"
	I0816 16:58:01.742650       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 16:58:01.743056       1 config.go:197] "Starting service config controller"
	I0816 16:58:01.743070       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 16:58:01.743113       1 config.go:104] "Starting endpoint slice config controller"
	I0816 16:58:01.743115       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 16:58:01.743342       1 config.go:326] "Starting node config controller"
	I0816 16:58:01.743492       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 16:58:01.843664       1 shared_informer.go:320] Caches are synced for service config
	I0816 16:58:01.843685       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0816 16:58:01.843720       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [55f1379f0e82] <==
	I0816 16:57:16.113856       1 serving.go:386] Generated self-signed cert in-memory
	W0816 16:57:17.570340       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 16:57:17.570835       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 16:57:17.570860       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 16:57:17.570864       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 16:57:17.595669       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 16:57:17.595686       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 16:57:17.597281       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 16:57:17.597769       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 16:57:17.597784       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 16:57:17.597838       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 16:57:17.700129       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0816 16:57:43.812561       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9fe37defa686] <==
	I0816 16:57:59.226931       1 serving.go:386] Generated self-signed cert in-memory
	W0816 16:58:00.954375       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 16:58:00.954458       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 16:58:00.954478       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 16:58:00.954511       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 16:58:00.980379       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0816 16:58:00.982112       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 16:58:00.983301       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0816 16:58:00.986966       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0816 16:58:00.987005       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 16:58:00.987020       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0816 16:58:01.087131       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 16:58:58 functional-435000 kubelet[6630]: E0816 16:58:58.090548    6630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-mx7pd_default(bcd1886c-0b71-4e40-94c6-1ec12f179b02)\"" pod="default/hello-node-connect-65d86f57f4-mx7pd" podUID="bcd1886c-0b71-4e40-94c6-1ec12f179b02"
	Aug 16 16:58:58 functional-435000 kubelet[6630]: E0816 16:58:58.200634    6630 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 16 16:58:58 functional-435000 kubelet[6630]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 16 16:58:58 functional-435000 kubelet[6630]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 16 16:58:58 functional-435000 kubelet[6630]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 16 16:58:58 functional-435000 kubelet[6630]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 16 16:58:58 functional-435000 kubelet[6630]: I0816 16:58:58.270355    6630 scope.go:117] "RemoveContainer" containerID="6554843641cbaf123f944c3087e3cd320d167e945f72c46fbc3c57a1df919527"
	Aug 16 16:59:03 functional-435000 kubelet[6630]: I0816 16:59:03.643242    6630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj78w\" (UniqueName: \"kubernetes.io/projected/a1ee5da2-af5c-41d7-b5d9-dc2adab1d0e4-kube-api-access-cj78w\") pod \"dashboard-metrics-scraper-c5db448b4-h4bsl\" (UID: \"a1ee5da2-af5c-41d7-b5d9-dc2adab1d0e4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-h4bsl"
	Aug 16 16:59:03 functional-435000 kubelet[6630]: I0816 16:59:03.643268    6630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a1ee5da2-af5c-41d7-b5d9-dc2adab1d0e4-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-h4bsl\" (UID: \"a1ee5da2-af5c-41d7-b5d9-dc2adab1d0e4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-h4bsl"
	Aug 16 16:59:03 functional-435000 kubelet[6630]: I0816 16:59:03.643282    6630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/49075e4d-5465-46d7-a990-bc0c87eba10c-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-jhjsb\" (UID: \"49075e4d-5465-46d7-a990-bc0c87eba10c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-jhjsb"
	Aug 16 16:59:03 functional-435000 kubelet[6630]: I0816 16:59:03.643291    6630 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nhbd\" (UniqueName: \"kubernetes.io/projected/49075e4d-5465-46d7-a990-bc0c87eba10c-kube-api-access-5nhbd\") pod \"kubernetes-dashboard-695b96c756-jhjsb\" (UID: \"49075e4d-5465-46d7-a990-bc0c87eba10c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-jhjsb"
	Aug 16 16:59:09 functional-435000 kubelet[6630]: I0816 16:59:09.195835    6630 scope.go:117] "RemoveContainer" containerID="16fcb656a0ed8fc0cc5fc2bf37c46450a98626c2311374ea5fb5013680985218"
	Aug 16 16:59:10 functional-435000 kubelet[6630]: I0816 16:59:10.200247    6630 scope.go:117] "RemoveContainer" containerID="16fcb656a0ed8fc0cc5fc2bf37c46450a98626c2311374ea5fb5013680985218"
	Aug 16 16:59:10 functional-435000 kubelet[6630]: I0816 16:59:10.200386    6630 scope.go:117] "RemoveContainer" containerID="133a6928899269efc49e080f6adce9021a7b8a6b3f72b1ae8bbf8414e5beffe5"
	Aug 16 16:59:10 functional-435000 kubelet[6630]: E0816 16:59:10.200448    6630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-mx7pd_default(bcd1886c-0b71-4e40-94c6-1ec12f179b02)\"" pod="default/hello-node-connect-65d86f57f4-mx7pd" podUID="bcd1886c-0b71-4e40-94c6-1ec12f179b02"
	Aug 16 16:59:10 functional-435000 kubelet[6630]: I0816 16:59:10.207791    6630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-h4bsl" podStartSLOduration=5.336499143 podStartE2EDuration="7.207751039s" podCreationTimestamp="2024-08-16 16:59:03 +0000 UTC" firstStartedPulling="2024-08-16 16:59:03.888183394 +0000 UTC m=+65.765680608" lastFinishedPulling="2024-08-16 16:59:05.759435291 +0000 UTC m=+67.636932504" observedRunningTime="2024-08-16 16:59:06.182015511 +0000 UTC m=+68.059512724" watchObservedRunningTime="2024-08-16 16:59:10.207751039 +0000 UTC m=+72.085248252"
	Aug 16 16:59:11 functional-435000 kubelet[6630]: I0816 16:59:11.195879    6630 scope.go:117] "RemoveContainer" containerID="a5ea41938111c150cc5a286d6754323d8335e2b5121e7a9343529c9235ad0412"
	Aug 16 16:59:12 functional-435000 kubelet[6630]: I0816 16:59:12.264994    6630 scope.go:117] "RemoveContainer" containerID="a5ea41938111c150cc5a286d6754323d8335e2b5121e7a9343529c9235ad0412"
	Aug 16 16:59:12 functional-435000 kubelet[6630]: I0816 16:59:12.265427    6630 scope.go:117] "RemoveContainer" containerID="05d964c9afd04fc699018e85c38f64b4bff5344537f15bbf4a9431cf7bb55c82"
	Aug 16 16:59:12 functional-435000 kubelet[6630]: E0816 16:59:12.265608    6630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-chfd8_default(47f772f8-ae5b-46eb-af5b-0ca0dedb72c9)\"" pod="default/hello-node-64b4f8f9ff-chfd8" podUID="47f772f8-ae5b-46eb-af5b-0ca0dedb72c9"
	Aug 16 16:59:12 functional-435000 kubelet[6630]: I0816 16:59:12.275861    6630 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-jhjsb" podStartSLOduration=2.886370773 podStartE2EDuration="9.275842065s" podCreationTimestamp="2024-08-16 16:59:03 +0000 UTC" firstStartedPulling="2024-08-16 16:59:04.121093847 +0000 UTC m=+65.998591019" lastFinishedPulling="2024-08-16 16:59:10.510565139 +0000 UTC m=+72.388062311" observedRunningTime="2024-08-16 16:59:11.23710031 +0000 UTC m=+73.114597524" watchObservedRunningTime="2024-08-16 16:59:12.275842065 +0000 UTC m=+74.153339279"
	Aug 16 16:59:21 functional-435000 kubelet[6630]: I0816 16:59:21.196561    6630 scope.go:117] "RemoveContainer" containerID="133a6928899269efc49e080f6adce9021a7b8a6b3f72b1ae8bbf8414e5beffe5"
	Aug 16 16:59:21 functional-435000 kubelet[6630]: E0816 16:59:21.197786    6630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-mx7pd_default(bcd1886c-0b71-4e40-94c6-1ec12f179b02)\"" pod="default/hello-node-connect-65d86f57f4-mx7pd" podUID="bcd1886c-0b71-4e40-94c6-1ec12f179b02"
	Aug 16 16:59:23 functional-435000 kubelet[6630]: I0816 16:59:23.196970    6630 scope.go:117] "RemoveContainer" containerID="05d964c9afd04fc699018e85c38f64b4bff5344537f15bbf4a9431cf7bb55c82"
	Aug 16 16:59:23 functional-435000 kubelet[6630]: E0816 16:59:23.197539    6630 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-chfd8_default(47f772f8-ae5b-46eb-af5b-0ca0dedb72c9)\"" pod="default/hello-node-64b4f8f9ff-chfd8" podUID="47f772f8-ae5b-46eb-af5b-0ca0dedb72c9"
	
	
	==> kubernetes-dashboard [de1afa4596b3] <==
	2024/08/16 16:59:10 Starting overwatch
	2024/08/16 16:59:10 Using namespace: kubernetes-dashboard
	2024/08/16 16:59:10 Using in-cluster config to connect to apiserver
	2024/08/16 16:59:10 Using secret token for csrf signing
	2024/08/16 16:59:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/16 16:59:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/16 16:59:10 Successful initial request to the apiserver, version: v1.31.0
	2024/08/16 16:59:10 Generating JWE encryption key
	2024/08/16 16:59:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/16 16:59:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/16 16:59:10 Initializing JWE encryption key from synchronized object
	2024/08/16 16:59:10 Creating in-cluster Sidecar client
	2024/08/16 16:59:10 Serving insecurely on HTTP port: 9090
	2024/08/16 16:59:10 Successful request to sidecar
	
	
	==> storage-provisioner [2fd306cd7284] <==
	I0816 16:58:01.701394       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 16:58:01.707906       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 16:58:01.708062       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 16:58:19.119023       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 16:58:19.119246       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-435000_567b4f99-d402-4911-9ed6-d4e17d375d57!
	I0816 16:58:19.119773       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3f0a269-c5dc-4468-b6cd-a56fdf1e28eb", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-435000_567b4f99-d402-4911-9ed6-d4e17d375d57 became leader
	I0816 16:58:19.222562       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-435000_567b4f99-d402-4911-9ed6-d4e17d375d57!
	I0816 16:58:32.078277       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0816 16:58:32.078434       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    990b7041-ffb4-40ab-9ba3-ae1d2a12ac69 338 0 2024-08-16 16:56:19 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-16 16:56:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-15e3a3d7-28a7-4f2d-84e4-263be94022f2 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  15e3a3d7-28a7-4f2d-84e4-263be94022f2 685 0 2024-08-16 16:58:32 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-16 16:58:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-16 16:58:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0816 16:58:32.078975       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-15e3a3d7-28a7-4f2d-84e4-263be94022f2" provisioned
	I0816 16:58:32.079015       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0816 16:58:32.079036       1 volume_store.go:212] Trying to save persistentvolume "pvc-15e3a3d7-28a7-4f2d-84e4-263be94022f2"
	I0816 16:58:32.078995       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"15e3a3d7-28a7-4f2d-84e4-263be94022f2", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0816 16:58:32.083829       1 volume_store.go:219] persistentvolume "pvc-15e3a3d7-28a7-4f2d-84e4-263be94022f2" saved
	I0816 16:58:32.084099       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"15e3a3d7-28a7-4f2d-84e4-263be94022f2", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-15e3a3d7-28a7-4f2d-84e4-263be94022f2
	
	
	==> storage-provisioner [d39e4aa9dffa] <==
	I0816 16:57:31.302459       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 16:57:31.307421       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 16:57:31.307449       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
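
The kube-proxy logs above repeatedly fail to clean up nftables rules with "Operation not supported"; a minimal in-guest check, assuming the nft CLI is present on the minikube ISO, is:

	$ out/minikube-darwin-arm64 ssh -p functional-435000 "sudo nft add table ip kube-proxy"
	# The same "Operation not supported" error here would confirm the guest kernel lacks
	# nftables support, matching the proxier.go errors captured in the log above.
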
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-435000 -n functional-435000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-435000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-435000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-435000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-435000/192.168.105.4
	Start Time:       Fri, 16 Aug 2024 09:58:48 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  docker://6a751f818b1db49f082e815da132b04fff502226254704b9b183e72b92369372
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 16 Aug 2024 09:58:49 -0700
	      Finished:     Fri, 16 Aug 2024 09:58:49 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xt759 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xt759:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  38s   default-scheduler  Successfully assigned default/busybox-mount to functional-435000
	  Normal  Pulling    38s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     37s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.24s (1.24s including waiting). Image size: 3547125 bytes.
	  Normal  Created    37s   kubelet            Created container mount-munger
	  Normal  Started    37s   kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (35.30s)
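
Both hello-node deployments in the post-mortem above are stuck in CrashLoopBackOff on the echoserver-arm container, which on a qemu2/arm64 guest commonly indicates an image/architecture mismatch. A quick check, assuming the registry.k8s.io/echoserver-arm:1.8 image path used by this test and a local docker CLI:

	$ kubectl --context functional-435000 logs deploy/hello-node-connect --previous
	# "exec format error" in the crashed container's output confirms a wrong-architecture binary
	$ docker manifest inspect registry.k8s.io/echoserver-arm:1.8 | grep architecture
	# lists the architectures the image manifest actually provides
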

TestMultiControlPlane/serial/StopSecondaryNode (214.13s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 node stop m02 -v=7 --alsologtostderr
E0816 10:04:06.935411    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-881000 node stop m02 -v=7 --alsologtostderr: (12.195197667s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
E0816 10:04:47.898357    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:06:09.820497    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:06:19.071285    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 7 (2m55.968542833s)

-- stdout --
	ha-881000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-881000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-881000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0816 10:04:15.645600    3542 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:04:15.645750    3542 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:04:15.645753    3542 out.go:358] Setting ErrFile to fd 2...
	I0816 10:04:15.645755    3542 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:04:15.645881    3542 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:04:15.646004    3542 out.go:352] Setting JSON to false
	I0816 10:04:15.646019    3542 mustload.go:65] Loading cluster: ha-881000
	I0816 10:04:15.646053    3542 notify.go:220] Checking for updates...
	I0816 10:04:15.646222    3542 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:04:15.646230    3542 status.go:255] checking status of ha-881000 ...
	I0816 10:04:15.646940    3542 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0816 10:04:15.646950    3542 host.go:66] Checking if "ha-881000" exists ...
	I0816 10:04:15.647052    3542 host.go:66] Checking if "ha-881000" exists ...
	I0816 10:04:15.647167    3542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:04:15.647176    3542 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/id_rsa Username:docker}
	W0816 10:04:41.569435    3542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0816 10:04:41.569565    3542 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0816 10:04:41.569608    3542 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0816 10:04:41.569659    3542 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 10:04:41.569677    3542 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0816 10:04:41.569688    3542 status.go:255] checking status of ha-881000-m02 ...
	I0816 10:04:41.570093    3542 status.go:330] ha-881000-m02 host status = "Stopped" (err=<nil>)
	I0816 10:04:41.570103    3542 status.go:343] host is not running, skipping remaining checks
	I0816 10:04:41.570108    3542 status.go:257] ha-881000-m02 status: &{Name:ha-881000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:04:41.570118    3542 status.go:255] checking status of ha-881000-m03 ...
	I0816 10:04:41.571320    3542 status.go:330] ha-881000-m03 host status = "Running" (err=<nil>)
	I0816 10:04:41.571331    3542 host.go:66] Checking if "ha-881000-m03" exists ...
	I0816 10:04:41.571576    3542 host.go:66] Checking if "ha-881000-m03" exists ...
	I0816 10:04:41.571825    3542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:04:41.571837    3542 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000-m03/id_rsa Username:docker}
	W0816 10:05:56.573365    3542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0816 10:05:56.573421    3542 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0816 10:05:56.573430    3542 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0816 10:05:56.573434    3542 status.go:257] ha-881000-m03 status: &{Name:ha-881000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 10:05:56.573442    3542 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0816 10:05:56.573446    3542 status.go:255] checking status of ha-881000-m04 ...
	I0816 10:05:56.574156    3542 status.go:330] ha-881000-m04 host status = "Running" (err=<nil>)
	I0816 10:05:56.574164    3542 host.go:66] Checking if "ha-881000-m04" exists ...
	I0816 10:05:56.574275    3542 host.go:66] Checking if "ha-881000-m04" exists ...
	I0816 10:05:56.574396    3542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:05:56.574402    3542 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000-m04/id_rsa Username:docker}
	W0816 10:07:11.573704    3542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0816 10:07:11.573768    3542 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0816 10:07:11.573778    3542 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0816 10:07:11.573782    3542 status.go:257] ha-881000-m04 status: &{Name:ha-881000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0816 10:07:11.573792    3542 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-881000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-881000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-881000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-881000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-881000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-881000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-881000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-881000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-881000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 3 (25.962865875s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0816 10:07:37.536535    3570 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0816 10:07:37.536544    3570 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.13s)
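
Every error above is an SSH dial timeout rather than a Kubernetes failure, so the problem sits below kubelet, in the socket_vmnet guest network. A quick reachability probe of a node's SSH port, assuming the guest IPs from the log and the BSD nc that ships with macOS:

	$ nc -z -G 5 192.168.105.5 22 && echo reachable || echo unreachable
	# -z scans without sending data; -G 5 caps the TCP connect timeout at 5 seconds,
	# far shorter than the multi-minute dial timeouts that inflate this test's runtime
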

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.96s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0816 10:08:25.938024    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:08:53.661395    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m16.990225834s)
ha_test.go:413: expected profile "ha-881000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-881000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-881000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-881000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 3 (25.9647715s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0816 10:09:20.486902    3590 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0816 10:09:20.486938    3590 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.96s)
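
The assertion inspects only the Status field of the profile JSON; when reproducing by hand, that field can be extracted directly, assuming jq is installed:

	$ out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | "\(.Name)\t\(.Status)"'
	# prints one name/status pair per profile; the test expects "ha-881000  Degraded",
	# while this run reported "Stopped"
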

TestMultiControlPlane/serial/RestartSecondaryNode (257.37s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.126884667s)

-- stdout --
	* Starting "ha-881000-m02" control-plane node in "ha-881000" cluster
	* Restarting existing qemu2 VM for "ha-881000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-881000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:09:20.549813    3595 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:09:20.550127    3595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:09:20.550132    3595 out.go:358] Setting ErrFile to fd 2...
	I0816 10:09:20.550135    3595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:09:20.550307    3595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:09:20.550636    3595 mustload.go:65] Loading cluster: ha-881000
	I0816 10:09:20.550951    3595 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0816 10:09:20.551236    3595 host.go:58] "ha-881000-m02" host status: Stopped
	I0816 10:09:20.555614    3595 out.go:177] * Starting "ha-881000-m02" control-plane node in "ha-881000" cluster
	I0816 10:09:20.558713    3595 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:09:20.558744    3595 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:09:20.558754    3595 cache.go:56] Caching tarball of preloaded images
	I0816 10:09:20.558890    3595 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:09:20.558898    3595 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:09:20.558990    3595 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/ha-881000/config.json ...
	I0816 10:09:20.559356    3595 start.go:360] acquireMachinesLock for ha-881000-m02: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:09:20.559407    3595 start.go:364] duration metric: took 35.625µs to acquireMachinesLock for "ha-881000-m02"
	I0816 10:09:20.559419    3595 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:09:20.559426    3595 fix.go:54] fixHost starting: m02
	I0816 10:09:20.559559    3595 fix.go:112] recreateIfNeeded on ha-881000-m02: state=Stopped err=<nil>
	W0816 10:09:20.559566    3595 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:09:20.562678    3595 out.go:177] * Restarting existing qemu2 VM for "ha-881000-m02" ...
	I0816 10:09:20.566634    3595 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:09:20.566691    3595 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:4d:26:a4:cf:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000-m02/disk.qcow2
	I0816 10:09:20.569850    3595 main.go:141] libmachine: STDOUT: 
	I0816 10:09:20.569873    3595 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:09:20.569901    3595 fix.go:56] duration metric: took 10.474833ms for fixHost
	I0816 10:09:20.569907    3595 start.go:83] releasing machines lock for "ha-881000-m02", held for 10.4945ms
	W0816 10:09:20.569915    3595 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:09:20.569958    3595 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:09:20.569963    3595 start.go:729] Will try again in 5 seconds ...
	I0816 10:09:25.572192    3595 start.go:360] acquireMachinesLock for ha-881000-m02: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:09:25.572800    3595 start.go:364] duration metric: took 430.334µs to acquireMachinesLock for "ha-881000-m02"
	I0816 10:09:25.573027    3595 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:09:25.573044    3595 fix.go:54] fixHost starting: m02
	I0816 10:09:25.573820    3595 fix.go:112] recreateIfNeeded on ha-881000-m02: state=Stopped err=<nil>
	W0816 10:09:25.573847    3595 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:09:25.578378    3595 out.go:177] * Restarting existing qemu2 VM for "ha-881000-m02" ...
	I0816 10:09:25.582360    3595 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:09:25.582628    3595 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:4d:26:a4:cf:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000-m02/disk.qcow2
	I0816 10:09:25.592316    3595 main.go:141] libmachine: STDOUT: 
	I0816 10:09:25.592383    3595 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:09:25.592499    3595 fix.go:56] duration metric: took 19.457125ms for fixHost
	I0816 10:09:25.592520    3595 start.go:83] releasing machines lock for "ha-881000-m02", held for 19.623ms
	W0816 10:09:25.592738    3595 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-881000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-881000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:09:25.597382    3595 out.go:201] 
	W0816 10:09:25.601474    3595 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:09:25.601502    3595 out.go:270] * 
	* 
	W0816 10:09:25.607461    3595 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:09:25.612187    3595 out.go:201] 

** /stderr **
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-881000 node start m02 -v=7 --alsologtostderr": exit status 80
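All of the restart attempts above fail at the same point: the qemu2 driver cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never gets its network and provisioning aborts before Kubernetes is ever involved. A minimal triage sketch for the build host, assuming the stock install prefix /opt/socket_vmnet seen in the command line above and the launchd label from the lima-vm socket_vmnet README (both are assumptions; adjust to the local setup):

    # check that the daemon socket exists
    ls -l /var/run/socket_vmnet
    # check that the daemon process is alive
    pgrep -fl socket_vmnet
    # restart via launchd; the label is assumed, not confirmed by this report
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

If the daemon stays down, every qemu2 test in this report that needs VM networking will fail with the same "Connection refused", which matches the pattern across the suite.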
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
E0816 10:11:19.065050    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 7 (2m57.201785042s)

-- stdout --
	ha-881000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-881000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-881000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent

-- /stdout --
** stderr ** 
	I0816 10:09:25.667907    3599 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:09:25.668127    3599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:09:25.668131    3599 out.go:358] Setting ErrFile to fd 2...
	I0816 10:09:25.668134    3599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:09:25.668317    3599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:09:25.668476    3599 out.go:352] Setting JSON to false
	I0816 10:09:25.668491    3599 mustload.go:65] Loading cluster: ha-881000
	I0816 10:09:25.668534    3599 notify.go:220] Checking for updates...
	I0816 10:09:25.668782    3599 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:09:25.668788    3599 status.go:255] checking status of ha-881000 ...
	I0816 10:09:25.669570    3599 status.go:330] ha-881000 host status = "Running" (err=<nil>)
	I0816 10:09:25.669581    3599 host.go:66] Checking if "ha-881000" exists ...
	I0816 10:09:25.669693    3599 host.go:66] Checking if "ha-881000" exists ...
	I0816 10:09:25.669826    3599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:09:25.669836    3599 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/id_rsa Username:docker}
	W0816 10:09:25.670030    3599 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0816 10:09:25.670051    3599 retry.go:31] will retry after 180.88714ms: dial tcp 192.168.105.5:22: connect: host is down
	W0816 10:09:25.853227    3599 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0816 10:09:25.853278    3599 retry.go:31] will retry after 485.865227ms: dial tcp 192.168.105.5:22: connect: host is down
	W0816 10:09:26.341812    3599 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0816 10:09:26.341902    3599 retry.go:31] will retry after 349.897296ms: dial tcp 192.168.105.5:22: connect: host is down
	W0816 10:09:26.693190    3599 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0816 10:09:26.693413    3599 retry.go:31] will retry after 189.427938ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0816 10:09:26.885071    3599 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/id_rsa Username:docker}
	W0816 10:09:52.808244    3599 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0816 10:09:52.808286    3599 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0816 10:09:52.808293    3599 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0816 10:09:52.808298    3599 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 10:09:52.808310    3599 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0816 10:09:52.808315    3599 status.go:255] checking status of ha-881000-m02 ...
	I0816 10:09:52.808501    3599 status.go:330] ha-881000-m02 host status = "Stopped" (err=<nil>)
	I0816 10:09:52.808507    3599 status.go:343] host is not running, skipping remaining checks
	I0816 10:09:52.808509    3599 status.go:257] ha-881000-m02 status: &{Name:ha-881000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:09:52.808513    3599 status.go:255] checking status of ha-881000-m03 ...
	I0816 10:09:52.809144    3599 status.go:330] ha-881000-m03 host status = "Running" (err=<nil>)
	I0816 10:09:52.809150    3599 host.go:66] Checking if "ha-881000-m03" exists ...
	I0816 10:09:52.809269    3599 host.go:66] Checking if "ha-881000-m03" exists ...
	I0816 10:09:52.809406    3599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:09:52.809413    3599 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000-m03/id_rsa Username:docker}
	W0816 10:11:07.811480    3599 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0816 10:11:07.811662    3599 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0816 10:11:07.811698    3599 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0816 10:11:07.811716    3599 status.go:257] ha-881000-m03 status: &{Name:ha-881000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0816 10:11:07.811760    3599 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0816 10:11:07.811780    3599 status.go:255] checking status of ha-881000-m04 ...
	I0816 10:11:07.814812    3599 status.go:330] ha-881000-m04 host status = "Running" (err=<nil>)
	I0816 10:11:07.814840    3599 host.go:66] Checking if "ha-881000-m04" exists ...
	I0816 10:11:07.815330    3599 host.go:66] Checking if "ha-881000-m04" exists ...
	I0816 10:11:07.815910    3599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 10:11:07.815942    3599 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000-m04/id_rsa Username:docker}
	W0816 10:12:22.817124    3599 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0816 10:12:22.817170    3599 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0816 10:12:22.817178    3599 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0816 10:12:22.817183    3599 status.go:257] ha-881000-m04 status: &{Name:ha-881000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0816 10:12:22.817192    3599 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr" : exit status 7
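For context on the exit code: as I read minikube's status command, the exit code is a bitmask (1 = host not running, 2 = cluster not running, 4 = kubernetes not running), so exit status 7 means all three checks failed on at least one node; by the same reading, the exit status 3 a few lines below would be the host and cluster flags. A quick way to see the code next to the output:

    out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
    echo "status exit code: $?"   # 7 here: host, cluster, and kubernetes all flagged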
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
E0816 10:12:42.150375    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:13:25.933041    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 3 (1m15.039916042s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0816 10:13:37.852814    3623 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0816 10:13:37.852864    3623 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (257.37s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (283.48s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-881000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-881000 -v=7 --alsologtostderr
E0816 10:16:19.061638    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:18:25.927412    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:19:49.014168    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-881000 -v=7 --alsologtostderr: (4m38.102773667s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-881000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-881000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.236329292s)

-- stdout --
	* [ha-881000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-881000" primary control-plane node in "ha-881000" cluster
	* Restarting existing qemu2 VM for "ha-881000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-881000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0816 10:20:47.151981    3761 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:20:47.152210    3761 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:20:47.152214    3761 out.go:358] Setting ErrFile to fd 2...
	I0816 10:20:47.152217    3761 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:20:47.152391    3761 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:20:47.153671    3761 out.go:352] Setting JSON to false
	I0816 10:20:47.173929    3761 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3010,"bootTime":1723825837,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:20:47.174003    3761 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:20:47.179275    3761 out.go:177] * [ha-881000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:20:47.187287    3761 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:20:47.187344    3761 notify.go:220] Checking for updates...
	I0816 10:20:47.195122    3761 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:20:47.199083    3761 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:20:47.202210    3761 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:20:47.205218    3761 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:20:47.208207    3761 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:20:47.211574    3761 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:20:47.211626    3761 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:20:47.216179    3761 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 10:20:47.223153    3761 start.go:297] selected driver: qemu2
	I0816 10:20:47.223163    3761 start.go:901] validating driver "qemu2" against &{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-881000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:20:47.223253    3761 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:20:47.225861    3761 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:20:47.225910    3761 cni.go:84] Creating CNI manager for ""
	I0816 10:20:47.225916    3761 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 10:20:47.225986    3761 start.go:340] cluster config:
	{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-881000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:20:47.230170    3761 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:20:47.239164    3761 out.go:177] * Starting "ha-881000" primary control-plane node in "ha-881000" cluster
	I0816 10:20:47.243211    3761 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:20:47.243224    3761 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:20:47.243234    3761 cache.go:56] Caching tarball of preloaded images
	I0816 10:20:47.243293    3761 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:20:47.243298    3761 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:20:47.243365    3761 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/ha-881000/config.json ...
	I0816 10:20:47.243810    3761 start.go:360] acquireMachinesLock for ha-881000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:20:47.243844    3761 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "ha-881000"
	I0816 10:20:47.243854    3761 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:20:47.243862    3761 fix.go:54] fixHost starting: 
	I0816 10:20:47.243987    3761 fix.go:112] recreateIfNeeded on ha-881000: state=Stopped err=<nil>
	W0816 10:20:47.243995    3761 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:20:47.248174    3761 out.go:177] * Restarting existing qemu2 VM for "ha-881000" ...
	I0816 10:20:47.255991    3761 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:20:47.256029    3761 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:fc:d8:46:3d:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/disk.qcow2
	I0816 10:20:47.258077    3761 main.go:141] libmachine: STDOUT: 
	I0816 10:20:47.258098    3761 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:20:47.258128    3761 fix.go:56] duration metric: took 14.268167ms for fixHost
	I0816 10:20:47.258134    3761 start.go:83] releasing machines lock for "ha-881000", held for 14.285833ms
	W0816 10:20:47.258140    3761 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:20:47.258173    3761 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:20:47.258178    3761 start.go:729] Will try again in 5 seconds ...
	I0816 10:20:52.260399    3761 start.go:360] acquireMachinesLock for ha-881000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:20:52.260941    3761 start.go:364] duration metric: took 407.583µs to acquireMachinesLock for "ha-881000"
	I0816 10:20:52.261088    3761 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:20:52.261108    3761 fix.go:54] fixHost starting: 
	I0816 10:20:52.261813    3761 fix.go:112] recreateIfNeeded on ha-881000: state=Stopped err=<nil>
	W0816 10:20:52.261842    3761 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:20:52.266414    3761 out.go:177] * Restarting existing qemu2 VM for "ha-881000" ...
	I0816 10:20:52.274453    3761 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:20:52.274682    3761 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:fc:d8:46:3d:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/disk.qcow2
	I0816 10:20:52.284501    3761 main.go:141] libmachine: STDOUT: 
	I0816 10:20:52.284581    3761 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:20:52.284658    3761 fix.go:56] duration metric: took 23.554ms for fixHost
	I0816 10:20:52.284676    3761 start.go:83] releasing machines lock for "ha-881000", held for 23.712833ms
	W0816 10:20:52.284806    3761 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-881000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-881000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:20:52.292330    3761 out.go:201] 
	W0816 10:20:52.296373    3761 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:20:52.296448    3761 out.go:270] * 
	* 
	W0816 10:20:52.298456    3761 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:20:52.310319    3761 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-881000 -v=7 --alsologtostderr" : exit status 80
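The cluster restart dies the same way as the node restarts above: socket_vmnet is unreachable, so even the primary node cannot be provisioned. minikube's own output suggests the recovery path; a sketch of it, worth running only after /var/run/socket_vmnet accepts connections again, and note that it destroys the existing profile:

    out/minikube-darwin-arm64 delete -p ha-881000
    out/minikube-darwin-arm64 start -p ha-881000 --wait=true -v=7 --alsologtostderr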
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-881000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 7 (33.095667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (283.48s)

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 node delete m03 -v=7 --alsologtostderr: exit status 83 (38.576417ms)

-- stdout --
	* The control-plane node ha-881000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-881000"

-- /stdout --
** stderr ** 
	I0816 10:20:52.451059    3774 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:20:52.451290    3774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:20:52.451294    3774 out.go:358] Setting ErrFile to fd 2...
	I0816 10:20:52.451295    3774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:20:52.451426    3774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:20:52.451648    3774 mustload.go:65] Loading cluster: ha-881000
	I0816 10:20:52.451876    3774 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0816 10:20:52.452193    3774 out.go:270] ! The control-plane node ha-881000 host is not running (will try others): state=Stopped
	! The control-plane node ha-881000 host is not running (will try others): state=Stopped
	W0816 10:20:52.452294    3774 out.go:270] ! The control-plane node ha-881000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-881000-m02 host is not running (will try others): state=Stopped
	I0816 10:20:52.456709    3774 out.go:177] * The control-plane node ha-881000-m03 host is not running: state=Stopped
	I0816 10:20:52.459739    3774 out.go:177]   To start a cluster, run: "minikube start -p ha-881000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-881000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 7 (30.271333ms)

-- stdout --
	ha-881000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0816 10:20:52.490172    3776 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:20:52.490329    3776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:20:52.490332    3776 out.go:358] Setting ErrFile to fd 2...
	I0816 10:20:52.490334    3776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:20:52.490472    3776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:20:52.490605    3776 out.go:352] Setting JSON to false
	I0816 10:20:52.490622    3776 mustload.go:65] Loading cluster: ha-881000
	I0816 10:20:52.490675    3776 notify.go:220] Checking for updates...
	I0816 10:20:52.490843    3776 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:20:52.490849    3776 status.go:255] checking status of ha-881000 ...
	I0816 10:20:52.491060    3776 status.go:330] ha-881000 host status = "Stopped" (err=<nil>)
	I0816 10:20:52.491063    3776 status.go:343] host is not running, skipping remaining checks
	I0816 10:20:52.491065    3776 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:20:52.491075    3776 status.go:255] checking status of ha-881000-m02 ...
	I0816 10:20:52.491161    3776 status.go:330] ha-881000-m02 host status = "Stopped" (err=<nil>)
	I0816 10:20:52.491164    3776 status.go:343] host is not running, skipping remaining checks
	I0816 10:20:52.491166    3776 status.go:257] ha-881000-m02 status: &{Name:ha-881000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:20:52.491170    3776 status.go:255] checking status of ha-881000-m03 ...
	I0816 10:20:52.491254    3776 status.go:330] ha-881000-m03 host status = "Stopped" (err=<nil>)
	I0816 10:20:52.491257    3776 status.go:343] host is not running, skipping remaining checks
	I0816 10:20:52.491258    3776 status.go:257] ha-881000-m03 status: &{Name:ha-881000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:20:52.491266    3776 status.go:255] checking status of ha-881000-m04 ...
	I0816 10:20:52.491367    3776 status.go:330] ha-881000-m04 host status = "Stopped" (err=<nil>)
	I0816 10:20:52.491370    3776 status.go:343] host is not running, skipping remaining checks
	I0816 10:20:52.491371    3776 status.go:257] ha-881000-m04 status: &{Name:ha-881000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 7 (29.521667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.684765542s)
ha_test.go:413: expected profile "ha-881000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-881000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-881000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-881000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 7 (56.337209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.74s)
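
The "Degraded"-vs-"Stopped" mismatch above is a status-field comparison against the `profile list --output json` payload. As a minimal sketch of that kind of check (the struct below is a hand-rolled assumption covering only the two fields the comparison needs, with field names taken from the JSON in the log; it is not minikube's actual config type, and ha_test.go's real assertion may differ):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors only the fields of the `profile list --output json`
// payload that the status check needs; json.Unmarshal ignores the rest of
// the per-profile config.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unmarshal failed:", err)
		return
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-881000" && p.Status != "Degraded" {
			// The condition that fires in the run above: with every VM down,
			// the profile reports "Stopped" rather than "Degraded".
			fmt.Printf("expected %q to be Degraded, got %q\n", p.Name, p.Status)
		}
	}
}

In this run every node was already stopped before the check, so the profile reports "Stopped" and the assertion at ha_test.go:413 fails.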

TestMultiControlPlane/serial/StopCluster (251.18s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 stop -v=7 --alsologtostderr
E0816 10:21:19.056858    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:23:25.903579    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-881000 stop -v=7 --alsologtostderr: (4m11.077396083s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr: exit status 7 (68.664625ms)

-- stdout --
	ha-881000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:25:05.383160    3857 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:25:05.383349    3857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:25:05.383353    3857 out.go:358] Setting ErrFile to fd 2...
	I0816 10:25:05.383357    3857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:25:05.383520    3857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:25:05.383683    3857 out.go:352] Setting JSON to false
	I0816 10:25:05.383699    3857 mustload.go:65] Loading cluster: ha-881000
	I0816 10:25:05.383740    3857 notify.go:220] Checking for updates...
	I0816 10:25:05.384038    3857 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:25:05.384045    3857 status.go:255] checking status of ha-881000 ...
	I0816 10:25:05.384368    3857 status.go:330] ha-881000 host status = "Stopped" (err=<nil>)
	I0816 10:25:05.384373    3857 status.go:343] host is not running, skipping remaining checks
	I0816 10:25:05.384376    3857 status.go:257] ha-881000 status: &{Name:ha-881000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:25:05.384390    3857 status.go:255] checking status of ha-881000-m02 ...
	I0816 10:25:05.384524    3857 status.go:330] ha-881000-m02 host status = "Stopped" (err=<nil>)
	I0816 10:25:05.384529    3857 status.go:343] host is not running, skipping remaining checks
	I0816 10:25:05.384532    3857 status.go:257] ha-881000-m02 status: &{Name:ha-881000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:25:05.384537    3857 status.go:255] checking status of ha-881000-m03 ...
	I0816 10:25:05.384668    3857 status.go:330] ha-881000-m03 host status = "Stopped" (err=<nil>)
	I0816 10:25:05.384673    3857 status.go:343] host is not running, skipping remaining checks
	I0816 10:25:05.384675    3857 status.go:257] ha-881000-m03 status: &{Name:ha-881000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 10:25:05.384683    3857 status.go:255] checking status of ha-881000-m04 ...
	I0816 10:25:05.384803    3857 status.go:330] ha-881000-m04 host status = "Stopped" (err=<nil>)
	I0816 10:25:05.384807    3857 status.go:343] host is not running, skipping remaining checks
	I0816 10:25:05.384809    3857 status.go:257] ha-881000-m04 status: &{Name:ha-881000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-881000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-881000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-881000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-881000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-881000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-881000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr": ha-881000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-881000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-881000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-881000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 7 (32.353833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (251.18s)
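
The three "status says not ..." messages above are emitted after `minikube status` returns exit status 7; the checks themselves amount to counting marker substrings in the captured status text. A rough, assumed reconstruction of that counting (ha_test.go's real assertions at lines 543/549/552 may be stricter):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated stand-in for the stdout block captured above; the real
	// input is the full four-node status listing.
	status := `ha-881000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped

ha-881000-m04
type: Worker
host: Stopped
kubelet: Stopped`

	fmt.Println("control planes:    ", strings.Count(status, "type: Control Plane"))
	fmt.Println("stopped kubelets:  ", strings.Count(status, "kubelet: Stopped"))
	fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped"))
}

Because the status command itself exited non-zero, the test reports all three expectations as unmet even though the listing shows every node stopped.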

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-881000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-881000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.1784305s)

-- stdout --
	* [ha-881000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-881000" primary control-plane node in "ha-881000" cluster
	* Restarting existing qemu2 VM for "ha-881000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-881000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:25:05.446531    3861 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:25:05.446656    3861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:25:05.446659    3861 out.go:358] Setting ErrFile to fd 2...
	I0816 10:25:05.446662    3861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:25:05.446797    3861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:25:05.447763    3861 out.go:352] Setting JSON to false
	I0816 10:25:05.463897    3861 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3268,"bootTime":1723825837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:25:05.463962    3861 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:25:05.467905    3861 out.go:177] * [ha-881000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:25:05.473797    3861 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:25:05.473892    3861 notify.go:220] Checking for updates...
	I0816 10:25:05.480843    3861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:25:05.483819    3861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:25:05.486797    3861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:25:05.489812    3861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:25:05.492708    3861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:25:05.496111    3861 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:25:05.496373    3861 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:25:05.500813    3861 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 10:25:05.507804    3861 start.go:297] selected driver: qemu2
	I0816 10:25:05.507810    3861 start.go:901] validating driver "qemu2" against &{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-881000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:25:05.507879    3861 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:25:05.510053    3861 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:25:05.510099    3861 cni.go:84] Creating CNI manager for ""
	I0816 10:25:05.510104    3861 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0816 10:25:05.510163    3861 start.go:340] cluster config:
	{Name:ha-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-881000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:25:05.513533    3861 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:25:05.521734    3861 out.go:177] * Starting "ha-881000" primary control-plane node in "ha-881000" cluster
	I0816 10:25:05.525757    3861 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:25:05.525774    3861 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:25:05.525788    3861 cache.go:56] Caching tarball of preloaded images
	I0816 10:25:05.525865    3861 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:25:05.525871    3861 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:25:05.525947    3861 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/ha-881000/config.json ...
	I0816 10:25:05.526390    3861 start.go:360] acquireMachinesLock for ha-881000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:25:05.526425    3861 start.go:364] duration metric: took 29.084µs to acquireMachinesLock for "ha-881000"
	I0816 10:25:05.526434    3861 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:25:05.526440    3861 fix.go:54] fixHost starting: 
	I0816 10:25:05.526562    3861 fix.go:112] recreateIfNeeded on ha-881000: state=Stopped err=<nil>
	W0816 10:25:05.526571    3861 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:25:05.530779    3861 out.go:177] * Restarting existing qemu2 VM for "ha-881000" ...
	I0816 10:25:05.538774    3861 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:25:05.538810    3861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:fc:d8:46:3d:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/disk.qcow2
	I0816 10:25:05.540901    3861 main.go:141] libmachine: STDOUT: 
	I0816 10:25:05.540923    3861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:25:05.540953    3861 fix.go:56] duration metric: took 14.514042ms for fixHost
	I0816 10:25:05.540958    3861 start.go:83] releasing machines lock for "ha-881000", held for 14.5285ms
	W0816 10:25:05.540965    3861 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:25:05.541008    3861 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:25:05.541013    3861 start.go:729] Will try again in 5 seconds ...
	I0816 10:25:10.543060    3861 start.go:360] acquireMachinesLock for ha-881000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:25:10.543579    3861 start.go:364] duration metric: took 352.542µs to acquireMachinesLock for "ha-881000"
	I0816 10:25:10.543737    3861 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:25:10.543763    3861 fix.go:54] fixHost starting: 
	I0816 10:25:10.544484    3861 fix.go:112] recreateIfNeeded on ha-881000: state=Stopped err=<nil>
	W0816 10:25:10.544509    3861 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:25:10.548940    3861 out.go:177] * Restarting existing qemu2 VM for "ha-881000" ...
	I0816 10:25:10.552903    3861 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:25:10.553143    3861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:fc:d8:46:3d:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/ha-881000/disk.qcow2
	I0816 10:25:10.562096    3861 main.go:141] libmachine: STDOUT: 
	I0816 10:25:10.562174    3861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:25:10.562258    3861 fix.go:56] duration metric: took 18.496334ms for fixHost
	I0816 10:25:10.562276    3861 start.go:83] releasing machines lock for "ha-881000", held for 18.637667ms
	W0816 10:25:10.562492    3861 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-881000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-881000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:25:10.569859    3861 out.go:201] 
	W0816 10:25:10.573883    3861 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:25:10.573948    3861 out.go:270] * 
	* 
	W0816 10:25:10.576676    3861 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:25:10.588818    3861 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-881000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 7 (67.672958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
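
Both restart attempts above die at the same point: minikube launches qemu through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the /var/run/socket_vmnet unix socket. A small probe (a sketch using the socket path from the log; it only tests reachability, not the vmnet protocol) reproduces the refusal independently of minikube:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The same socket path minikube hands to socket_vmnet_client in the
	// qemu command line logged above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no socket_vmnet daemon listening, this prints roughly:
		// dial unix /var/run/socket_vmnet: connect: connection refused
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On a healthy host the socket_vmnet daemon owns that socket and the dial succeeds; here it is refused, which is why both fixHost attempts fail within milliseconds and the start exits with GUEST_PROVISION.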

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-881000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-881000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-881000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-881000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 7 (29.093791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-881000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-881000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.431291ms)

-- stdout --
	* The control-plane node ha-881000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-881000"

-- /stdout --
** stderr ** 
	I0816 10:25:10.771615    3878 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:25:10.771766    3878 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:25:10.771769    3878 out.go:358] Setting ErrFile to fd 2...
	I0816 10:25:10.771772    3878 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:25:10.771906    3878 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:25:10.772110    3878 mustload.go:65] Loading cluster: ha-881000
	I0816 10:25:10.772308    3878 config.go:182] Loaded profile config "ha-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0816 10:25:10.772616    3878 out.go:270] ! The control-plane node ha-881000 host is not running (will try others): state=Stopped
	! The control-plane node ha-881000 host is not running (will try others): state=Stopped
	W0816 10:25:10.772720    3878 out.go:270] ! The control-plane node ha-881000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-881000-m02 host is not running (will try others): state=Stopped
	I0816 10:25:10.777201    3878 out.go:177] * The control-plane node ha-881000-m03 host is not running: state=Stopped
	I0816 10:25:10.781115    3878 out.go:177]   To start a cluster, run: "minikube start -p ha-881000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-881000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-881000 -n ha-881000: exit status 7 (29.462041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-881000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.12s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-395000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-395000 --driver=qemu2 : exit status 80 (10.048336s)

-- stdout --
	* [image-395000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-395000" primary control-plane node in "image-395000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-395000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-395000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-395000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-395000 -n image-395000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-395000 -n image-395000: exit status 7 (68.890167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-395000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.12s)

TestJSONOutput/start/Command (9.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-336000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-336000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.78647525s)

-- stdout --
	{"specversion":"1.0","id":"f290b5b1-6a6e-4e28-9bc4-06c5cb78c9b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-336000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3dfd99e6-89e3-4d63-aac6-5e51f1cb109f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19461"}}
	{"specversion":"1.0","id":"c2dd2b5c-f9a6-41bb-8dda-676b5f6fa630","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig"}}
	{"specversion":"1.0","id":"641a3e85-50cd-4846-adb3-604b2ce393c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d717b5eb-570a-42fd-ab42-af437d1f2cc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1b8d1122-1cbd-4929-a6f5-816558bba420","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube"}}
	{"specversion":"1.0","id":"84aada69-e588-4022-8294-361200ebb3f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"90149fca-3ff8-4fcb-a3b4-2c4f4adff3da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"daa750cc-7897-49c3-a6b2-c87abb588458","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2cbdb051-5bdf-4bc6-ba3a-d035eb371ee6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-336000\" primary control-plane node in \"json-output-336000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"289aed86-2cee-4da7-a1fe-a3367c3a7c56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"b0e0b5c6-ceb4-4cde-8f9c-962db1d394aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-336000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"60cf023e-b489-4b70-9de7-e60f9d4222ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"389acd6f-a852-474a-85e4-1b3bb6e90e7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"92cc944a-7adc-482f-9874-d470a68c0fa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-336000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"ee3e5916-55dc-4f95-8cf4-983d555534c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"d02a20e5-64a0-4657-8098-93563dec4606","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-336000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.79s)
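
The "invalid character 'O'" failure above is the JSON decoder hitting the raw "OUTPUT:" line that the qemu wrapper interleaved with the CloudEvents stream; every stdout line is expected to be a standalone JSON object. A minimal line-wise decode (a sketch; json_output_test.go's actual CloudEvents handling may differ) shows the same error:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// A valid CloudEvent line followed by the raw wrapper output that
	// appears in the stdout block above.
	out := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=19461"}}
OUTPUT: `

	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Reproduces the failure mode in the log:
			// invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
	}
	fmt.Println("every stdout line parsed as JSON")
}

The same mechanism explains the unpause failure further down, where the leading '*' of the plain-text advice line trips the decoder.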

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-336000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-336000 --output=json --user=testUser: exit status 83 (75.734042ms)

-- stdout --
	{"specversion":"1.0","id":"de0d315d-1a7d-417b-8703-2cc20ec28dd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-336000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"ce4935c1-b746-4e4d-babc-53f69d7c5144","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-336000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-336000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-336000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-336000 --output=json --user=testUser: exit status 83 (45.103042ms)

-- stdout --
	* The control-plane node json-output-336000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-336000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-336000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-336000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.14s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-701000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-701000 --driver=qemu2 : exit status 80 (9.853221834s)

-- stdout --
	* [first-701000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-701000" primary control-plane node in "first-701000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-701000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-701000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-701000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-16 10:25:44.916229 -0700 PDT m=+2294.154270459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-703000 -n second-703000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-703000 -n second-703000: exit status 85 (77.939625ms)

-- stdout --
	* Profile "second-703000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-703000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-703000" host is not running, skipping log retrieval (state="* Profile \"second-703000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-703000\"")
helpers_test.go:175: Cleaning up "second-703000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-703000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-16 10:25:45.098159 -0700 PDT m=+2294.336204501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-701000 -n first-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-701000 -n first-701000: exit status 7 (29.858083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-701000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-701000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-701000
--- FAIL: TestMinikubeProfile (10.14s)
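Every qemu2 start in this report fails identically: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, QEMU never receives its network file descriptor, and host creation aborts with GUEST_PROVISION. The condition can be checked independently of minikube with a plain unix-socket dial; the probe below is a hypothetical diagnostic, not something the suite runs:

// socketprobe.go - hypothetical diagnostic for the recurring failure above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this host the daemon is not listening, so this prints something like:
		// dial unix /var/run/socket_vmnet: connect: connection refused
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Until the socket_vmnet daemon on the Jenkins host is listening again, every test below that needs a qemu2 VM fails the same way within about ten seconds.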

TestMountStart/serial/StartWithMountFirst (9.98s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-009000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-009000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.906955041s)

-- stdout --
	* [mount-start-1-009000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-009000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-009000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-009000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-009000 -n mount-start-1-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-009000 -n mount-start-1-009000: exit status 7 (68.068042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-009000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.98s)

TestMultiNode/serial/FreshStart2Nodes (9.98s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-420000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-420000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.906735416s)

-- stdout --
	* [multinode-420000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-420000" primary control-plane node in "multinode-420000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-420000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:25:55.387965    4022 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:25:55.388088    4022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:25:55.388092    4022 out.go:358] Setting ErrFile to fd 2...
	I0816 10:25:55.388094    4022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:25:55.388223    4022 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:25:55.389305    4022 out.go:352] Setting JSON to false
	I0816 10:25:55.405729    4022 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3318,"bootTime":1723825837,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:25:55.405805    4022 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:25:55.412530    4022 out.go:177] * [multinode-420000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:25:55.420571    4022 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:25:55.420604    4022 notify.go:220] Checking for updates...
	I0816 10:25:55.427485    4022 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:25:55.430521    4022 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:25:55.432018    4022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:25:55.435494    4022 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:25:55.438472    4022 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:25:55.441757    4022 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:25:55.446478    4022 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:25:55.453413    4022 start.go:297] selected driver: qemu2
	I0816 10:25:55.453420    4022 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:25:55.453426    4022 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:25:55.455679    4022 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:25:55.459466    4022 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:25:55.462577    4022 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:25:55.462633    4022 cni.go:84] Creating CNI manager for ""
	I0816 10:25:55.462639    4022 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0816 10:25:55.462642    4022 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 10:25:55.462674    4022 start.go:340] cluster config:
	{Name:multinode-420000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:25:55.466369    4022 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:25:55.474507    4022 out.go:177] * Starting "multinode-420000" primary control-plane node in "multinode-420000" cluster
	I0816 10:25:55.478458    4022 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:25:55.478473    4022 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:25:55.478491    4022 cache.go:56] Caching tarball of preloaded images
	I0816 10:25:55.478554    4022 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:25:55.478560    4022 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:25:55.478781    4022 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/multinode-420000/config.json ...
	I0816 10:25:55.478792    4022 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/multinode-420000/config.json: {Name:mkbe24d609facddd146c8c9fde0b3c6d0dbf75c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:25:55.479015    4022 start.go:360] acquireMachinesLock for multinode-420000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:25:55.479051    4022 start.go:364] duration metric: took 29.458µs to acquireMachinesLock for "multinode-420000"
	I0816 10:25:55.479064    4022 start.go:93] Provisioning new machine with config: &{Name:multinode-420000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:25:55.479089    4022 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:25:55.487530    4022 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:25:55.505349    4022 start.go:159] libmachine.API.Create for "multinode-420000" (driver="qemu2")
	I0816 10:25:55.505383    4022 client.go:168] LocalClient.Create starting
	I0816 10:25:55.505443    4022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:25:55.505471    4022 main.go:141] libmachine: Decoding PEM data...
	I0816 10:25:55.505480    4022 main.go:141] libmachine: Parsing certificate...
	I0816 10:25:55.505521    4022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:25:55.505543    4022 main.go:141] libmachine: Decoding PEM data...
	I0816 10:25:55.505551    4022 main.go:141] libmachine: Parsing certificate...
	I0816 10:25:55.505907    4022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:25:55.652943    4022 main.go:141] libmachine: Creating SSH key...
	I0816 10:25:55.845393    4022 main.go:141] libmachine: Creating Disk image...
	I0816 10:25:55.845399    4022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:25:55.845603    4022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2
	I0816 10:25:55.855179    4022 main.go:141] libmachine: STDOUT: 
	I0816 10:25:55.855195    4022 main.go:141] libmachine: STDERR: 
	I0816 10:25:55.855246    4022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2 +20000M
	I0816 10:25:55.863120    4022 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:25:55.863134    4022 main.go:141] libmachine: STDERR: 
	I0816 10:25:55.863148    4022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2
	I0816 10:25:55.863153    4022 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:25:55.863168    4022 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:25:55.863199    4022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a6:f0:4a:1b:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2
	I0816 10:25:55.864823    4022 main.go:141] libmachine: STDOUT: 
	I0816 10:25:55.864840    4022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:25:55.864860    4022 client.go:171] duration metric: took 359.481625ms to LocalClient.Create
	I0816 10:25:57.867001    4022 start.go:128] duration metric: took 2.387940708s to createHost
	I0816 10:25:57.867110    4022 start.go:83] releasing machines lock for "multinode-420000", held for 2.388063208s
	W0816 10:25:57.867169    4022 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:25:57.880381    4022 out.go:177] * Deleting "multinode-420000" in qemu2 ...
	W0816 10:25:57.913644    4022 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:25:57.913671    4022 start.go:729] Will try again in 5 seconds ...
	I0816 10:26:02.915887    4022 start.go:360] acquireMachinesLock for multinode-420000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:26:02.916313    4022 start.go:364] duration metric: took 335.917µs to acquireMachinesLock for "multinode-420000"
	I0816 10:26:02.916446    4022 start.go:93] Provisioning new machine with config: &{Name:multinode-420000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:26:02.916704    4022 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:26:02.935550    4022 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:26:02.985956    4022 start.go:159] libmachine.API.Create for "multinode-420000" (driver="qemu2")
	I0816 10:26:02.986006    4022 client.go:168] LocalClient.Create starting
	I0816 10:26:02.986124    4022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:26:02.986179    4022 main.go:141] libmachine: Decoding PEM data...
	I0816 10:26:02.986197    4022 main.go:141] libmachine: Parsing certificate...
	I0816 10:26:02.986281    4022 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:26:02.986325    4022 main.go:141] libmachine: Decoding PEM data...
	I0816 10:26:02.986336    4022 main.go:141] libmachine: Parsing certificate...
	I0816 10:26:02.986839    4022 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:26:03.144377    4022 main.go:141] libmachine: Creating SSH key...
	I0816 10:26:03.201842    4022 main.go:141] libmachine: Creating Disk image...
	I0816 10:26:03.201847    4022 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:26:03.202020    4022 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2
	I0816 10:26:03.211314    4022 main.go:141] libmachine: STDOUT: 
	I0816 10:26:03.211334    4022 main.go:141] libmachine: STDERR: 
	I0816 10:26:03.211385    4022 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2 +20000M
	I0816 10:26:03.219256    4022 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:26:03.219274    4022 main.go:141] libmachine: STDERR: 
	I0816 10:26:03.219286    4022 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2
	I0816 10:26:03.219292    4022 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:26:03.219300    4022 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:26:03.219330    4022 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:a8:6a:56:05:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2
	I0816 10:26:03.220914    4022 main.go:141] libmachine: STDOUT: 
	I0816 10:26:03.220931    4022 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:26:03.220943    4022 client.go:171] duration metric: took 234.938ms to LocalClient.Create
	I0816 10:26:05.223081    4022 start.go:128] duration metric: took 2.30637575s to createHost
	I0816 10:26:05.223145    4022 start.go:83] releasing machines lock for "multinode-420000", held for 2.306856458s
	W0816 10:26:05.223433    4022 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-420000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-420000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:26:05.237968    4022 out.go:201] 
	W0816 10:26:05.243253    4022 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:26:05.243298    4022 out.go:270] * 
	* 
	W0816 10:26:05.245777    4022 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:26:05.254070    4022 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-420000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (65.733792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.98s)
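The --alsologtostderr trace above makes the retry shape visible: libmachine's create path fails after roughly 2.4 seconds, minikube deletes the half-created machine, waits five seconds ("Will try again in 5 seconds ..."), fails once more, and only then exits with GUEST_PROVISION. A compressed sketch of that control flow, paraphrased for illustration; the real logic lives in minikube's start.go and is more involved:

// retryflow.go - illustrative paraphrase of the start/retry behavior logged above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the libmachine create call, which fails as soon
// as socket_vmnet_client cannot reach /var/run/socket_vmnet.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const profile = "multinode-420000"
	if err := createHost(profile); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := createHost(profile); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}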

TestMultiNode/serial/DeployApp2Nodes (102.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.609792ms)

** stderr ** 
	error: cluster "multinode-420000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- rollout status deployment/busybox: exit status 1 (58.384959ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.292125ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.492292ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.3615ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.903917ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.709875ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0816 10:26:19.031334    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.917709ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.56125ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.753209ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.598083ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.085958ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.232333ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.882041ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.814917ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.217959ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.101167ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (29.111375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (102.03s)
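DeployApp2Nodes spends its 102 seconds in a poll loop: FreshStart2Nodes never created the cluster, so every kubectl invocation fails with `no server found for cluster "multinode-420000"`, and the test keeps re-running the pod-IP query until its retry budget is exhausted. Roughly this pattern, rendered hypothetically rather than copied from multinode_test.go:

// pollpods.go - hypothetical rendering of the retry loop visible above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"kubectl", "-p", "multinode-420000", "--",
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}",
	}
	for attempt := 1; attempt <= 11; attempt++ {
		out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
		if err == nil && len(out) > 0 {
			fmt.Printf("pod IPs: %s\n", out)
			return
		}
		fmt.Printf("attempt %d: failed to retrieve Pod IPs (may be temporary): %v\n", attempt, err)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("failed to resolve pod IPs: retries exhausted")
}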

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-420000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.417ms)

** stderr ** 
	error: no server found for cluster "multinode-420000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (29.983542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-420000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-420000 -v 3 --alsologtostderr: exit status 83 (41.664833ms)

-- stdout --
	* The control-plane node multinode-420000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-420000"

-- /stdout --
** stderr ** 
	I0816 10:27:47.479843    4113 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:27:47.479989    4113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:47.479991    4113 out.go:358] Setting ErrFile to fd 2...
	I0816 10:27:47.479994    4113 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:47.480114    4113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:27:47.480348    4113 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:27:47.480533    4113 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:27:47.484472    4113 out.go:177] * The control-plane node multinode-420000 host is not running: state=Stopped
	I0816 10:27:47.488489    4113 out.go:177]   To start a cluster, run: "minikube start -p multinode-420000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-420000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (28.855458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-420000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-420000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.719833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-420000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-420000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-420000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (29.247209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
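The second error here, "unexpected end of JSON input", is a follow-on rather than a separate bug: kubectl exited non-zero, so the stdout the test then tries to decode is empty, and encoding/json returns exactly that message for zero-length input. A two-line demonstration, for illustration only:

// emptyjson.go - why an empty kubectl result yields "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels) // kubectl failed, so there was no output to decode
	fmt.Println(err)                           // unexpected end of JSON input
}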

TestMultiNode/serial/ProfileList (0.07s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-420000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-420000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-420000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-420000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (28.860792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
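
Note: the assertion at multinode_test.go:166 counts the entries of Config.Nodes for the profile; the JSON above carries a single control-plane node where three were expected, because the worker nodes were never created. A sketch of that count, with struct shapes inferred from the dump above (not minikube's actual types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Shapes inferred from the `profile list --output json` dump above;
	// only the fields needed for the node count are modeled.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					ControlPlane bool `json:"ControlPlane"`
					Worker       bool `json:"Worker"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-420000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // prints 1, not the expected 3
		}
	}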

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status --output json --alsologtostderr: exit status 7 (29.224666ms)

-- stdout --
	{"Name":"multinode-420000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0816 10:27:47.683118    4125 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:27:47.683270    4125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:47.683273    4125 out.go:358] Setting ErrFile to fd 2...
	I0816 10:27:47.683275    4125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:47.683398    4125 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:27:47.683508    4125 out.go:352] Setting JSON to true
	I0816 10:27:47.683523    4125 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:27:47.683588    4125 notify.go:220] Checking for updates...
	I0816 10:27:47.683716    4125 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:27:47.683723    4125 status.go:255] checking status of multinode-420000 ...
	I0816 10:27:47.683919    4125 status.go:330] multinode-420000 host status = "Stopped" (err=<nil>)
	I0816 10:27:47.683923    4125 status.go:343] host is not running, skipping remaining checks
	I0816 10:27:47.683925    4125 status.go:257] multinode-420000 status: &{Name:multinode-420000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-420000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (29.634ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
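
Note: the error at multinode_test.go:191 is a shape mismatch, not corrupt output: with only one node present, `minikube status --output json` prints a single JSON object, while the test decodes into a slice ([]cmd.Status) as it would for a multi-node cluster. A minimal sketch (Status here is a hypothetical stand-in for cmd.Status):

	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
	)

	// Hypothetical stand-in for cmd.Status; only two fields modeled.
	type Status struct {
		Name string
		Host string
	}

	func main() {
		raw := []byte(`{"Name":"multinode-420000","Host":"Stopped"}`)

		var many []Status
		fmt.Println(json.Unmarshal(raw, &many)) // json: cannot unmarshal object into Go value of type []main.Status

		// A tolerant caller could accept either shape:
		if bytes.HasPrefix(bytes.TrimSpace(raw), []byte("{")) {
			var one Status
			if err := json.Unmarshal(raw, &one); err == nil {
				many = []Status{one}
			}
		}
		fmt.Println(many)
	}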

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 node stop m03: exit status 85 (47.295709ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-420000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status: exit status 7 (29.770916ms)

-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status --alsologtostderr: exit status 7 (29.635375ms)

-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:27:47.820253    4133 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:27:47.820379    4133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:47.820382    4133 out.go:358] Setting ErrFile to fd 2...
	I0816 10:27:47.820384    4133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:47.820516    4133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:27:47.820639    4133 out.go:352] Setting JSON to false
	I0816 10:27:47.820651    4133 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:27:47.820706    4133 notify.go:220] Checking for updates...
	I0816 10:27:47.820834    4133 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:27:47.820842    4133 status.go:255] checking status of multinode-420000 ...
	I0816 10:27:47.821034    4133 status.go:330] multinode-420000 host status = "Stopped" (err=<nil>)
	I0816 10:27:47.821038    4133 status.go:343] host is not running, skipping remaining checks
	I0816 10:27:47.821040    4133 status.go:257] multinode-420000 status: &{Name:multinode-420000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-420000 status --alsologtostderr": multinode-420000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (28.938ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
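
Note: two distinct exit codes appear here: `node stop m03` exits 85 with GUEST_NODE_RETRIEVE because m03 was never created, while the follow-up `status` calls exit 7, which helpers_test.go explicitly treats as "may be ok" (a stopped host still reports its state). A sketch of telling the two apart from a Go caller via os/exec (binary path and flags as in the log; the code meanings are taken from this report, not from minikube's source):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-420000", "status")
		out, err := cmd.Output() // Output still returns captured stdout on failure
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// In this report: 7 = host stopped (status text still printed),
			// 85 = GUEST_NODE_RETRIEVE (requested node not found).
			fmt.Printf("exit %d\n%s", ee.ExitCode(), out)
			return
		}
		fmt.Printf("running:\n%s", out)
	}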

TestMultiNode/serial/StartAfterStop (55.29s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.105709ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0816 10:27:47.879634    4137 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:27:47.879858    4137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:47.879860    4137 out.go:358] Setting ErrFile to fd 2...
	I0816 10:27:47.879863    4137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:47.880012    4137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:27:47.880248    4137 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:27:47.880430    4137 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:27:47.883530    4137 out.go:201] 
	W0816 10:27:47.886467    4137 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0816 10:27:47.886472    4137 out.go:270] * 
	* 
	W0816 10:27:47.888096    4137 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:27:47.891480    4137 out.go:201] 

** /stderr **
multinode_test.go:284: I0816 10:27:47.879634    4137 out.go:345] Setting OutFile to fd 1 ...
I0816 10:27:47.879858    4137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 10:27:47.879860    4137 out.go:358] Setting ErrFile to fd 2...
I0816 10:27:47.879863    4137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 10:27:47.880012    4137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
I0816 10:27:47.880248    4137 mustload.go:65] Loading cluster: multinode-420000
I0816 10:27:47.880430    4137 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 10:27:47.883530    4137 out.go:201] 
W0816 10:27:47.886467    4137 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0816 10:27:47.886472    4137 out.go:270] * 
* 
W0816 10:27:47.888096    4137 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0816 10:27:47.891480    4137 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-420000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr: exit status 7 (29.015459ms)

-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:27:47.923801    4139 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:27:47.923941    4139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:47.923944    4139 out.go:358] Setting ErrFile to fd 2...
	I0816 10:27:47.923947    4139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:47.924055    4139 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:27:47.924174    4139 out.go:352] Setting JSON to false
	I0816 10:27:47.924185    4139 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:27:47.924233    4139 notify.go:220] Checking for updates...
	I0816 10:27:47.924419    4139 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:27:47.924424    4139 status.go:255] checking status of multinode-420000 ...
	I0816 10:27:47.924622    4139 status.go:330] multinode-420000 host status = "Stopped" (err=<nil>)
	I0816 10:27:47.924626    4139 status.go:343] host is not running, skipping remaining checks
	I0816 10:27:47.924628    4139 status.go:257] multinode-420000 status: &{Name:multinode-420000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr: exit status 7 (73.444334ms)

-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:27:49.118679    4141 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:27:49.118854    4141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:49.118858    4141 out.go:358] Setting ErrFile to fd 2...
	I0816 10:27:49.118862    4141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:49.119019    4141 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:27:49.119160    4141 out.go:352] Setting JSON to false
	I0816 10:27:49.119175    4141 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:27:49.119221    4141 notify.go:220] Checking for updates...
	I0816 10:27:49.119450    4141 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:27:49.119456    4141 status.go:255] checking status of multinode-420000 ...
	I0816 10:27:49.119725    4141 status.go:330] multinode-420000 host status = "Stopped" (err=<nil>)
	I0816 10:27:49.119730    4141 status.go:343] host is not running, skipping remaining checks
	I0816 10:27:49.119733    4141 status.go:257] multinode-420000 status: &{Name:multinode-420000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr: exit status 7 (71.075833ms)

-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:27:51.035855    4143 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:27:51.036058    4143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:51.036062    4143 out.go:358] Setting ErrFile to fd 2...
	I0816 10:27:51.036065    4143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:51.036264    4143 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:27:51.036436    4143 out.go:352] Setting JSON to false
	I0816 10:27:51.036451    4143 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:27:51.036496    4143 notify.go:220] Checking for updates...
	I0816 10:27:51.036762    4143 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:27:51.036770    4143 status.go:255] checking status of multinode-420000 ...
	I0816 10:27:51.037085    4143 status.go:330] multinode-420000 host status = "Stopped" (err=<nil>)
	I0816 10:27:51.037090    4143 status.go:343] host is not running, skipping remaining checks
	I0816 10:27:51.037093    4143 status.go:257] multinode-420000 status: &{Name:multinode-420000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr: exit status 7 (72.472ms)

-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:27:52.411490    4145 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:27:52.411671    4145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:52.411675    4145 out.go:358] Setting ErrFile to fd 2...
	I0816 10:27:52.411679    4145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:52.411870    4145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:27:52.412030    4145 out.go:352] Setting JSON to false
	I0816 10:27:52.412045    4145 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:27:52.412089    4145 notify.go:220] Checking for updates...
	I0816 10:27:52.412301    4145 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:27:52.412307    4145 status.go:255] checking status of multinode-420000 ...
	I0816 10:27:52.412571    4145 status.go:330] multinode-420000 host status = "Stopped" (err=<nil>)
	I0816 10:27:52.412576    4145 status.go:343] host is not running, skipping remaining checks
	I0816 10:27:52.412579    4145 status.go:257] multinode-420000 status: &{Name:multinode-420000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr: exit status 7 (72.351958ms)

-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:27:57.270424    4149 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:27:57.270608    4149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:57.270612    4149 out.go:358] Setting ErrFile to fd 2...
	I0816 10:27:57.270616    4149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:27:57.270786    4149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:27:57.270942    4149 out.go:352] Setting JSON to false
	I0816 10:27:57.270956    4149 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:27:57.270995    4149 notify.go:220] Checking for updates...
	I0816 10:27:57.271194    4149 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:27:57.271201    4149 status.go:255] checking status of multinode-420000 ...
	I0816 10:27:57.271467    4149 status.go:330] multinode-420000 host status = "Stopped" (err=<nil>)
	I0816 10:27:57.271472    4149 status.go:343] host is not running, skipping remaining checks
	I0816 10:27:57.271475    4149 status.go:257] multinode-420000 status: &{Name:multinode-420000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr: exit status 7 (72.743208ms)

-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:28:02.900531    4151 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:28:02.900742    4151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:02.900747    4151 out.go:358] Setting ErrFile to fd 2...
	I0816 10:28:02.900750    4151 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:02.900942    4151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:28:02.901106    4151 out.go:352] Setting JSON to false
	I0816 10:28:02.901121    4151 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:28:02.901162    4151 notify.go:220] Checking for updates...
	I0816 10:28:02.901383    4151 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:28:02.901390    4151 status.go:255] checking status of multinode-420000 ...
	I0816 10:28:02.901665    4151 status.go:330] multinode-420000 host status = "Stopped" (err=<nil>)
	I0816 10:28:02.901670    4151 status.go:343] host is not running, skipping remaining checks
	I0816 10:28:02.901673    4151 status.go:257] multinode-420000 status: &{Name:multinode-420000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr: exit status 7 (73.656292ms)

-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:28:07.084010    4153 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:28:07.084217    4153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:07.084221    4153 out.go:358] Setting ErrFile to fd 2...
	I0816 10:28:07.084224    4153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:07.084415    4153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:28:07.084570    4153 out.go:352] Setting JSON to false
	I0816 10:28:07.084585    4153 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:28:07.084630    4153 notify.go:220] Checking for updates...
	I0816 10:28:07.084822    4153 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:28:07.084829    4153 status.go:255] checking status of multinode-420000 ...
	I0816 10:28:07.085111    4153 status.go:330] multinode-420000 host status = "Stopped" (err=<nil>)
	I0816 10:28:07.085117    4153 status.go:343] host is not running, skipping remaining checks
	I0816 10:28:07.085119    4153 status.go:257] multinode-420000 status: &{Name:multinode-420000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr: exit status 7 (71.142125ms)

-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:28:18.709454    4157 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:28:18.709639    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:18.709643    4157 out.go:358] Setting ErrFile to fd 2...
	I0816 10:28:18.709647    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:18.709825    4157 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:28:18.709972    4157 out.go:352] Setting JSON to false
	I0816 10:28:18.709987    4157 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:28:18.710027    4157 notify.go:220] Checking for updates...
	I0816 10:28:18.710274    4157 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:28:18.710282    4157 status.go:255] checking status of multinode-420000 ...
	I0816 10:28:18.710576    4157 status.go:330] multinode-420000 host status = "Stopped" (err=<nil>)
	I0816 10:28:18.710581    4157 status.go:343] host is not running, skipping remaining checks
	I0816 10:28:18.710584    4157 status.go:257] multinode-420000 status: &{Name:multinode-420000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0816 10:28:25.895998    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr: exit status 7 (72.398458ms)

-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0816 10:28:43.099699    4166 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:28:43.099931    4166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:43.099937    4166 out.go:358] Setting ErrFile to fd 2...
	I0816 10:28:43.099940    4166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:43.100131    4166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:28:43.100293    4166 out.go:352] Setting JSON to false
	I0816 10:28:43.100311    4166 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:28:43.100350    4166 notify.go:220] Checking for updates...
	I0816 10:28:43.100567    4166 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:28:43.100578    4166 status.go:255] checking status of multinode-420000 ...
	I0816 10:28:43.100848    4166 status.go:330] multinode-420000 host status = "Stopped" (err=<nil>)
	I0816 10:28:43.100853    4166 status.go:343] host is not running, skipping remaining checks
	I0816 10:28:43.100856    4166 status.go:257] multinode-420000 status: &{Name:multinode-420000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-420000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (33.390041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (55.29s)
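
Note: the 55-second duration is the test polling `status` at multinode_test.go:290 with growing gaps (the stderr timestamps run from 10:27:47 to 10:28:43) while the host never leaves Stopped. A hedged sketch of such a backoff poll (intervals and deadline are illustrative, not the test's exact schedule):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// pollStatus reruns `minikube status` until it exits 0 or the
	// deadline passes, doubling the wait between attempts.
	func pollStatus(profile string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for wait := time.Second; time.Now().Before(stop); wait *= 2 {
			if exec.Command("out/minikube-darwin-arm64", "-p", profile, "status").Run() == nil {
				return nil
			}
			time.Sleep(wait)
		}
		return fmt.Errorf("%s never reported a running host within %v", profile, deadline)
	}

	func main() {
		if err := pollStatus("multinode-420000", 55*time.Second); err != nil {
			fmt.Println(err)
		}
	}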

TestMultiNode/serial/RestartKeepsNodes (8.41s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-420000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-420000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-420000: (3.050523s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-420000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-420000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.222402667s)

-- stdout --
	* [multinode-420000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-420000" primary control-plane node in "multinode-420000" cluster
	* Restarting existing qemu2 VM for "multinode-420000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-420000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:28:46.279209    4193 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:28:46.279421    4193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:46.279425    4193 out.go:358] Setting ErrFile to fd 2...
	I0816 10:28:46.279428    4193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:46.279602    4193 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:28:46.280873    4193 out.go:352] Setting JSON to false
	I0816 10:28:46.300498    4193 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3489,"bootTime":1723825837,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:28:46.300567    4193 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:28:46.305544    4193 out.go:177] * [multinode-420000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:28:46.312418    4193 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:28:46.312458    4193 notify.go:220] Checking for updates...
	I0816 10:28:46.319371    4193 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:28:46.322421    4193 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:28:46.325488    4193 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:28:46.328489    4193 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:28:46.331393    4193 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:28:46.334763    4193 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:28:46.334816    4193 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:28:46.339347    4193 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 10:28:46.346406    4193 start.go:297] selected driver: qemu2
	I0816 10:28:46.346413    4193 start.go:901] validating driver "qemu2" against &{Name:multinode-420000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:multinode-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:28:46.346467    4193 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:28:46.348931    4193 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:28:46.348959    4193 cni.go:84] Creating CNI manager for ""
	I0816 10:28:46.348968    4193 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:28:46.349009    4193 start.go:340] cluster config:
	{Name:multinode-420000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-420000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:28:46.352747    4193 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:28:46.361421    4193 out.go:177] * Starting "multinode-420000" primary control-plane node in "multinode-420000" cluster
	I0816 10:28:46.365360    4193 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:28:46.365374    4193 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:28:46.365383    4193 cache.go:56] Caching tarball of preloaded images
	I0816 10:28:46.365439    4193 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:28:46.365444    4193 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:28:46.365503    4193 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/multinode-420000/config.json ...
	I0816 10:28:46.365944    4193 start.go:360] acquireMachinesLock for multinode-420000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:28:46.365982    4193 start.go:364] duration metric: took 31.209µs to acquireMachinesLock for "multinode-420000"
	I0816 10:28:46.365995    4193 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:28:46.366001    4193 fix.go:54] fixHost starting: 
	I0816 10:28:46.366149    4193 fix.go:112] recreateIfNeeded on multinode-420000: state=Stopped err=<nil>
	W0816 10:28:46.366159    4193 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:28:46.373349    4193 out.go:177] * Restarting existing qemu2 VM for "multinode-420000" ...
	I0816 10:28:46.377395    4193 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:28:46.377453    4193 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:a8:6a:56:05:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2
	I0816 10:28:46.379790    4193 main.go:141] libmachine: STDOUT: 
	I0816 10:28:46.379814    4193 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:28:46.379848    4193 fix.go:56] duration metric: took 13.848917ms for fixHost
	I0816 10:28:46.379853    4193 start.go:83] releasing machines lock for "multinode-420000", held for 13.863917ms
	W0816 10:28:46.379860    4193 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:28:46.379906    4193 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:28:46.379911    4193 start.go:729] Will try again in 5 seconds ...
	I0816 10:28:51.382001    4193 start.go:360] acquireMachinesLock for multinode-420000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:28:51.382406    4193 start.go:364] duration metric: took 315.708µs to acquireMachinesLock for "multinode-420000"
	I0816 10:28:51.382543    4193 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:28:51.382562    4193 fix.go:54] fixHost starting: 
	I0816 10:28:51.383261    4193 fix.go:112] recreateIfNeeded on multinode-420000: state=Stopped err=<nil>
	W0816 10:28:51.383292    4193 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:28:51.387789    4193 out.go:177] * Restarting existing qemu2 VM for "multinode-420000" ...
	I0816 10:28:51.395680    4193 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:28:51.395926    4193 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:a8:6a:56:05:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2
	I0816 10:28:51.405158    4193 main.go:141] libmachine: STDOUT: 
	I0816 10:28:51.405258    4193 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:28:51.405348    4193 fix.go:56] duration metric: took 22.788208ms for fixHost
	I0816 10:28:51.405361    4193 start.go:83] releasing machines lock for "multinode-420000", held for 22.93325ms
	W0816 10:28:51.405543    4193 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-420000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-420000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:28:51.411603    4193 out.go:201] 
	W0816 10:28:51.415722    4193 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:28:51.415763    4193 out.go:270] * 
	* 
	W0816 10:28:51.418236    4193 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:28:51.426734    4193 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-420000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-420000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (32.525125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.41s)
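Every restart attempt above dies at the same point: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the VM never comes up. A minimal Go probe for reproducing the failure outside the test harness follows; it is a hypothetical diagnostic, not part of the minikube suite, and the socket path is taken from SocketVMnetPath in the cluster config logged above.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the same unix socket socket_vmnet_client uses. On this CI host
        // the dial fails because no socket_vmnet daemon is listening there.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is listening")
    }

If the probe fails the same way, the fix is on the host (start the socket_vmnet daemon), not in minikube or the test.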

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 node delete m03: exit status 83 (38.781334ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-420000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-420000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-420000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status --alsologtostderr: exit status 7 (29.331792ms)

                                                
                                                
-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:28:51.608835    4209 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:28:51.608996    4209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:51.608999    4209 out.go:358] Setting ErrFile to fd 2...
	I0816 10:28:51.609001    4209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:51.609116    4209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:28:51.609228    4209 out.go:352] Setting JSON to false
	I0816 10:28:51.609240    4209 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:28:51.609314    4209 notify.go:220] Checking for updates...
	I0816 10:28:51.609440    4209 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:28:51.609450    4209 status.go:255] checking status of multinode-420000 ...
	I0816 10:28:51.609660    4209 status.go:330] multinode-420000 host status = "Stopped" (err=<nil>)
	I0816 10:28:51.609664    4209 status.go:343] host is not running, skipping remaining checks
	I0816 10:28:51.609666    4209 status.go:257] multinode-420000 status: &{Name:multinode-420000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-420000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (30.430709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
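`node delete` exits with status 83 and prints the "host is not running: state=Stopped" advice because the command loads the profile and bails out before attempting any delete. A hedged sketch of that pre-flight guard is below; the names and shape are illustrative (the real check lives in minikube's mustload package and may differ), only the message text and exit code are taken from the log above.

    package main

    import (
        "fmt"
        "os"
    )

    // requireRunning mimics the guard that produced the advice above: commands
    // that need a live control plane refuse to proceed unless the host state
    // is "Running". Illustrative only.
    func requireRunning(profile, state string) {
        if state != "Running" {
            fmt.Printf("* The control-plane node %s host is not running: state=%s\n", profile, state)
            fmt.Printf("  To start a cluster, run: \"minikube start -p %s\"\n", profile)
            os.Exit(83)
        }
    }

    func main() {
        requireRunning("multinode-420000", "Stopped")
    }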

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-420000 stop: (3.841233958s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status: exit status 7 (64.376625ms)

                                                
                                                
-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-420000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-420000 status --alsologtostderr: exit status 7 (31.536542ms)

                                                
                                                
-- stdout --
	multinode-420000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:28:55.577049    4235 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:28:55.577179    4235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:55.577182    4235 out.go:358] Setting ErrFile to fd 2...
	I0816 10:28:55.577184    4235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:55.577298    4235 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:28:55.577417    4235 out.go:352] Setting JSON to false
	I0816 10:28:55.577432    4235 mustload.go:65] Loading cluster: multinode-420000
	I0816 10:28:55.577499    4235 notify.go:220] Checking for updates...
	I0816 10:28:55.577612    4235 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:28:55.577617    4235 status.go:255] checking status of multinode-420000 ...
	I0816 10:28:55.577817    4235 status.go:330] multinode-420000 host status = "Stopped" (err=<nil>)
	I0816 10:28:55.577821    4235 status.go:343] host is not running, skipping remaining checks
	I0816 10:28:55.577823    4235 status.go:257] multinode-420000 status: &{Name:multinode-420000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-420000 status --alsologtostderr": multinode-420000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-420000 status --alsologtostderr": multinode-420000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (30.382917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.97s)
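The two assertions at multinode_test.go:364 and :368 fail on count, not on state: the stop itself succeeded, but `status` reports only the control-plane node (the worker nodes were never created, since every earlier start failed on socket_vmnet), so counting stopped hosts and kubelets yields one where a two-node cluster is expected. A rough sketch of that kind of check, assuming the real test counts the per-node "host: Stopped" / "kubelet: Stopped" markers (this helper is illustrative, not the actual test code):

    package multinode

    import (
        "strings"
        "testing"
    )

    // assertAllStopped fails unless the status output contains one stopped
    // host and one stopped kubelet per expected node.
    func assertAllStopped(t *testing.T, statusOutput string, wantNodes int) {
        t.Helper()
        if got := strings.Count(statusOutput, "host: Stopped"); got != wantNodes {
            t.Errorf("incorrect number of stopped hosts: got %d, want %d", got, wantNodes)
        }
        if got := strings.Count(statusOutput, "kubelet: Stopped"); got != wantNodes {
            t.Errorf("incorrect number of stopped kubelets: got %d, want %d", got, wantNodes)
        }
    }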

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-420000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-420000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.177890458s)

                                                
                                                
-- stdout --
	* [multinode-420000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-420000" primary control-plane node in "multinode-420000" cluster
	* Restarting existing qemu2 VM for "multinode-420000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-420000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:28:55.636880    4239 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:28:55.637022    4239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:55.637026    4239 out.go:358] Setting ErrFile to fd 2...
	I0816 10:28:55.637028    4239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:28:55.637170    4239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:28:55.638166    4239 out.go:352] Setting JSON to false
	I0816 10:28:55.653988    4239 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3498,"bootTime":1723825837,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:28:55.654055    4239 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:28:55.661889    4239 out.go:177] * [multinode-420000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:28:55.665933    4239 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:28:55.665969    4239 notify.go:220] Checking for updates...
	I0816 10:28:55.671852    4239 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:28:55.674866    4239 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:28:55.676163    4239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:28:55.678874    4239 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:28:55.681896    4239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:28:55.685231    4239 config.go:182] Loaded profile config "multinode-420000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:28:55.685512    4239 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:28:55.689826    4239 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 10:28:55.696842    4239 start.go:297] selected driver: qemu2
	I0816 10:28:55.696851    4239 start.go:901] validating driver "qemu2" against &{Name:multinode-420000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:28:55.696909    4239 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:28:55.699168    4239 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:28:55.699206    4239 cni.go:84] Creating CNI manager for ""
	I0816 10:28:55.699211    4239 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0816 10:28:55.699253    4239 start.go:340] cluster config:
	{Name:multinode-420000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-420000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:28:55.702628    4239 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:28:55.709921    4239 out.go:177] * Starting "multinode-420000" primary control-plane node in "multinode-420000" cluster
	I0816 10:28:55.713848    4239 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:28:55.713864    4239 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:28:55.713873    4239 cache.go:56] Caching tarball of preloaded images
	I0816 10:28:55.713921    4239 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:28:55.713926    4239 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:28:55.713998    4239 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/multinode-420000/config.json ...
	I0816 10:28:55.714421    4239 start.go:360] acquireMachinesLock for multinode-420000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:28:55.714447    4239 start.go:364] duration metric: took 20.917µs to acquireMachinesLock for "multinode-420000"
	I0816 10:28:55.714458    4239 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:28:55.714465    4239 fix.go:54] fixHost starting: 
	I0816 10:28:55.714574    4239 fix.go:112] recreateIfNeeded on multinode-420000: state=Stopped err=<nil>
	W0816 10:28:55.714583    4239 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:28:55.718874    4239 out.go:177] * Restarting existing qemu2 VM for "multinode-420000" ...
	I0816 10:28:55.726869    4239 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:28:55.726905    4239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:a8:6a:56:05:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2
	I0816 10:28:55.728883    4239 main.go:141] libmachine: STDOUT: 
	I0816 10:28:55.728900    4239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:28:55.728926    4239 fix.go:56] duration metric: took 14.461916ms for fixHost
	I0816 10:28:55.728930    4239 start.go:83] releasing machines lock for "multinode-420000", held for 14.478083ms
	W0816 10:28:55.728935    4239 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:28:55.728974    4239 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:28:55.728978    4239 start.go:729] Will try again in 5 seconds ...
	I0816 10:29:00.731025    4239 start.go:360] acquireMachinesLock for multinode-420000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:29:00.731437    4239 start.go:364] duration metric: took 297µs to acquireMachinesLock for "multinode-420000"
	I0816 10:29:00.731571    4239 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:29:00.731592    4239 fix.go:54] fixHost starting: 
	I0816 10:29:00.732309    4239 fix.go:112] recreateIfNeeded on multinode-420000: state=Stopped err=<nil>
	W0816 10:29:00.732336    4239 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:29:00.740811    4239 out.go:177] * Restarting existing qemu2 VM for "multinode-420000" ...
	I0816 10:29:00.743667    4239 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:29:00.743924    4239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:a8:6a:56:05:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/multinode-420000/disk.qcow2
	I0816 10:29:00.753315    4239 main.go:141] libmachine: STDOUT: 
	I0816 10:29:00.753400    4239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:29:00.753499    4239 fix.go:56] duration metric: took 21.911084ms for fixHost
	I0816 10:29:00.753525    4239 start.go:83] releasing machines lock for "multinode-420000", held for 22.067291ms
	W0816 10:29:00.753711    4239 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-420000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-420000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:29:00.759802    4239 out.go:201] 
	W0816 10:29:00.763806    4239 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:29:00.763835    4239 out.go:270] * 
	* 
	W0816 10:29:00.766383    4239 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:29:00.773807    4239 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-420000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (68.120541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
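The stderr above shows minikube's fixed retry on host start: fixHost fails, start.go logs "Will try again in 5 seconds ...", the second attempt fails identically, and the command exits with GUEST_PROVISION. A compressed sketch of that flow follows; the function names and shape are illustrative (minikube's real loop lives in its start path, not in this form), only the messages and the 5-second pause come from the log.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHostWithRetry mirrors the log above: one retry after a fixed
    // 5-second pause, then the error is surfaced to the user.
    func startHostWithRetry(start func() error) error {
        if err := start(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            return start()
        }
        return nil
    }

    func main() {
        err := startHostWithRetry(func() error {
            return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
        })
        if err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }

Because the failure is environmental, the retry cannot succeed: both attempts dial the same missing daemon socket.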

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-420000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-420000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-420000-m01 --driver=qemu2 : exit status 80 (10.106702042s)

                                                
                                                
-- stdout --
	* [multinode-420000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-420000-m01" primary control-plane node in "multinode-420000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-420000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-420000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-420000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-420000-m02 --driver=qemu2 : exit status 80 (9.928892166s)

                                                
                                                
-- stdout --
	* [multinode-420000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-420000-m02" primary control-plane node in "multinode-420000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-420000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-420000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-420000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-420000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-420000: exit status 83 (83.32075ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-420000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-420000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-420000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-420000 -n multinode-420000: exit status 7 (29.432666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-420000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.26s)
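ValidateNameConflict never reaches its real assertion: it tries to show that a profile named multinode-420000-m02 collides with the multi-node naming scheme (secondary nodes are addressed as <profile>-m02, -m03, and so on, as in the `node delete m03` call earlier), but both helper profiles fail to boot for the same socket_vmnet reason, so `node add` only reports the stopped host. A tiny sketch of the naming collision under test; the regexp is an assumption for illustration, not minikube's validation code.

    package main

    import (
        "fmt"
        "regexp"
    )

    // nodeSuffix matches the -mNN suffix used for secondary node names,
    // e.g. multinode-420000-m02. Illustrative only.
    var nodeSuffix = regexp.MustCompile(`-m\d{2}$`)

    func looksLikeNodeName(profile string) bool {
        return nodeSuffix.MatchString(profile)
    }

    func main() {
        fmt.Println(looksLikeNodeName("multinode-420000-m02")) // true: conflicts
        fmt.Println(looksLikeNodeName("multinode-420000"))     // false
    }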

                                                
                                    
TestPreload (9.92s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-231000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0816 10:29:22.114418    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-231000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.771802084s)

                                                
                                                
-- stdout --
	* [test-preload-231000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-231000" primary control-plane node in "test-preload-231000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-231000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:29:21.254540    4298 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:29:21.254679    4298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:29:21.254682    4298 out.go:358] Setting ErrFile to fd 2...
	I0816 10:29:21.254685    4298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:29:21.254825    4298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:29:21.255951    4298 out.go:352] Setting JSON to false
	I0816 10:29:21.271974    4298 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3524,"bootTime":1723825837,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:29:21.272065    4298 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:29:21.278177    4298 out.go:177] * [test-preload-231000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:29:21.286194    4298 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:29:21.286240    4298 notify.go:220] Checking for updates...
	I0816 10:29:21.294203    4298 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:29:21.297180    4298 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:29:21.300157    4298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:29:21.303199    4298 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:29:21.306133    4298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:29:21.309585    4298 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:29:21.309643    4298 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:29:21.314180    4298 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:29:21.321193    4298 start.go:297] selected driver: qemu2
	I0816 10:29:21.321202    4298 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:29:21.321209    4298 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:29:21.323647    4298 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:29:21.326216    4298 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:29:21.329257    4298 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:29:21.329297    4298 cni.go:84] Creating CNI manager for ""
	I0816 10:29:21.329305    4298 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:29:21.329309    4298 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:29:21.329346    4298 start.go:340] cluster config:
	{Name:test-preload-231000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:29:21.333117    4298 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:29:21.340247    4298 out.go:177] * Starting "test-preload-231000" primary control-plane node in "test-preload-231000" cluster
	I0816 10:29:21.344140    4298 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0816 10:29:21.344227    4298 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/test-preload-231000/config.json ...
	I0816 10:29:21.344243    4298 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/test-preload-231000/config.json: {Name:mk01aedb973b1b29fd0ae3688cfef11baa9c6628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:29:21.344231    4298 cache.go:107] acquiring lock: {Name:mk86e1e0f0dd0a6c1f029b1a5f8e88f860876b98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:29:21.344238    4298 cache.go:107] acquiring lock: {Name:mk0c68e4f5dd877e515ddf71cf70db9b69e4b714 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:29:21.344259    4298 cache.go:107] acquiring lock: {Name:mkcd9274967a3fef1981eb906f70e5ee08aa617f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:29:21.344425    4298 cache.go:107] acquiring lock: {Name:mkde5b44c7be979dd99f9e877020a09154699a14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:29:21.344403    4298 cache.go:107] acquiring lock: {Name:mkd9aecac8c894fd49315255f2999df16b3079b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:29:21.344467    4298 cache.go:107] acquiring lock: {Name:mk22f12d6d01532f5eb92a91f344167d3dab1745 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:29:21.344496    4298 cache.go:107] acquiring lock: {Name:mk49e5dc6a4b4dbb8b3d6124dcb11a5592215afc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:29:21.344523    4298 cache.go:107] acquiring lock: {Name:mk95be56d241c6c3d67d766420219b46f813df2a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:29:21.344513    4298 start.go:360] acquireMachinesLock for test-preload-231000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:29:21.344672    4298 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0816 10:29:21.344723    4298 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0816 10:29:21.344741    4298 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0816 10:29:21.344752    4298 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0816 10:29:21.344754    4298 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0816 10:29:21.344703    4298 start.go:364] duration metric: took 144.875µs to acquireMachinesLock for "test-preload-231000"
	I0816 10:29:21.344826    4298 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:29:21.344843    4298 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:29:21.344943    4298 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0816 10:29:21.344857    4298 start.go:93] Provisioning new machine with config: &{Name:test-preload-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:29:21.344996    4298 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:29:21.353136    4298 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:29:21.356830    4298 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0816 10:29:21.356956    4298 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0816 10:29:21.357003    4298 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:29:21.357415    4298 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:29:21.357406    4298 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0816 10:29:21.359680    4298 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0816 10:29:21.359816    4298 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0816 10:29:21.359816    4298 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0816 10:29:21.371569    4298 start.go:159] libmachine.API.Create for "test-preload-231000" (driver="qemu2")
	I0816 10:29:21.371597    4298 client.go:168] LocalClient.Create starting
	I0816 10:29:21.371688    4298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:29:21.371721    4298 main.go:141] libmachine: Decoding PEM data...
	I0816 10:29:21.371730    4298 main.go:141] libmachine: Parsing certificate...
	I0816 10:29:21.371766    4298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:29:21.371789    4298 main.go:141] libmachine: Decoding PEM data...
	I0816 10:29:21.371800    4298 main.go:141] libmachine: Parsing certificate...
	I0816 10:29:21.372157    4298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:29:21.524975    4298 main.go:141] libmachine: Creating SSH key...
	I0816 10:29:21.568214    4298 main.go:141] libmachine: Creating Disk image...
	I0816 10:29:21.568232    4298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:29:21.568412    4298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/disk.qcow2
	I0816 10:29:21.577855    4298 main.go:141] libmachine: STDOUT: 
	I0816 10:29:21.577874    4298 main.go:141] libmachine: STDERR: 
	I0816 10:29:21.577919    4298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/disk.qcow2 +20000M
	I0816 10:29:21.587372    4298 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:29:21.587392    4298 main.go:141] libmachine: STDERR: 
	I0816 10:29:21.587403    4298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/disk.qcow2
	I0816 10:29:21.587408    4298 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:29:21.587421    4298 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:29:21.587455    4298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:c5:fb:98:08:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/disk.qcow2
	I0816 10:29:21.589252    4298 main.go:141] libmachine: STDOUT: 
	I0816 10:29:21.589269    4298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:29:21.589288    4298 client.go:171] duration metric: took 217.691625ms to LocalClient.Create
	I0816 10:29:21.825653    4298 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0816 10:29:21.826882    4298 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0816 10:29:21.868043    4298 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0816 10:29:21.868082    4298 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0816 10:29:21.877702    4298 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0816 10:29:21.881161    4298 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0816 10:29:21.959165    4298 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0816 10:29:21.998306    4298 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0816 10:29:22.128823    4298 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0816 10:29:22.128871    4298 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 784.506916ms
	I0816 10:29:22.128911    4298 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0816 10:29:22.219873    4298 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0816 10:29:22.219969    4298 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 10:29:22.474440    4298 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 10:29:22.474480    4298 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.130273584s
	I0816 10:29:22.474500    4298 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 10:29:23.589478    4298 start.go:128] duration metric: took 2.244495333s to createHost
	I0816 10:29:23.589536    4298 start.go:83] releasing machines lock for "test-preload-231000", held for 2.244760834s
	W0816 10:29:23.589607    4298 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:29:23.599194    4298 out.go:177] * Deleting "test-preload-231000" in qemu2 ...
	I0816 10:29:23.620950    4298 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0816 10:29:23.620990    4298 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.276509625s
	I0816 10:29:23.621005    4298 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	W0816 10:29:23.627141    4298 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:29:23.627171    4298 start.go:729] Will try again in 5 seconds ...
	I0816 10:29:24.294342    4298 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0816 10:29:24.294386    4298 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.950040084s
	I0816 10:29:24.294413    4298 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0816 10:29:26.177154    4298 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0816 10:29:26.177198    4298 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.832848583s
	I0816 10:29:26.177226    4298 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0816 10:29:26.389625    4298 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0816 10:29:26.389679    4298 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.045553667s
	I0816 10:29:26.389704    4298 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0816 10:29:26.526555    4298 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0816 10:29:26.526601    4298 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.182468542s
	I0816 10:29:26.526623    4298 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0816 10:29:28.627219    4298 start.go:360] acquireMachinesLock for test-preload-231000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:29:28.627615    4298 start.go:364] duration metric: took 314.292µs to acquireMachinesLock for "test-preload-231000"
	I0816 10:29:28.627740    4298 start.go:93] Provisioning new machine with config: &{Name:test-preload-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:29:28.627970    4298 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:29:28.637328    4298 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:29:28.690161    4298 start.go:159] libmachine.API.Create for "test-preload-231000" (driver="qemu2")
	I0816 10:29:28.690214    4298 client.go:168] LocalClient.Create starting
	I0816 10:29:28.690349    4298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:29:28.690413    4298 main.go:141] libmachine: Decoding PEM data...
	I0816 10:29:28.690454    4298 main.go:141] libmachine: Parsing certificate...
	I0816 10:29:28.690517    4298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:29:28.690562    4298 main.go:141] libmachine: Decoding PEM data...
	I0816 10:29:28.690579    4298 main.go:141] libmachine: Parsing certificate...
	I0816 10:29:28.691113    4298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:29:28.847752    4298 main.go:141] libmachine: Creating SSH key...
	I0816 10:29:28.927171    4298 main.go:141] libmachine: Creating Disk image...
	I0816 10:29:28.927177    4298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:29:28.927357    4298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/disk.qcow2
	I0816 10:29:28.937145    4298 main.go:141] libmachine: STDOUT: 
	I0816 10:29:28.937163    4298 main.go:141] libmachine: STDERR: 
	I0816 10:29:28.937208    4298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/disk.qcow2 +20000M
	I0816 10:29:28.945242    4298 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:29:28.945256    4298 main.go:141] libmachine: STDERR: 
	I0816 10:29:28.945266    4298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/disk.qcow2
	I0816 10:29:28.945270    4298 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:29:28.945281    4298 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:29:28.945313    4298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:78:9e:f2:26:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/test-preload-231000/disk.qcow2
	I0816 10:29:28.947036    4298 main.go:141] libmachine: STDOUT: 
	I0816 10:29:28.947055    4298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:29:28.947068    4298 client.go:171] duration metric: took 256.854458ms to LocalClient.Create
	I0816 10:29:30.211288    4298 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0816 10:29:30.211358    4298 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.867071625s
	I0816 10:29:30.211397    4298 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0816 10:29:30.211452    4298 cache.go:87] Successfully saved all images to host disk.
	I0816 10:29:30.948322    4298 start.go:128] duration metric: took 2.320333417s to createHost
	I0816 10:29:30.948424    4298 start.go:83] releasing machines lock for "test-preload-231000", held for 2.320798042s
	W0816 10:29:30.948815    4298 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-231000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-231000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:29:30.963311    4298 out.go:201] 
	W0816 10:29:30.967452    4298 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:29:30.967478    4298 out.go:270] * 
	* 
	W0816 10:29:30.970203    4298 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:29:30.984289    4298 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-231000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-16 10:29:31.002169 -0700 PDT m=+2520.245038126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-231000 -n test-preload-231000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-231000 -n test-preload-231000: exit status 7 (66.485834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-231000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-231000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-231000
--- FAIL: TestPreload (9.92s)
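
Every start failure in this report reduces to the same root cause visible in the trace above: the QEMU command is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet. The following minimal Go sketch (not minikube code; the socket path is copied from the log) reproduces the probe that fails with "connection refused" when the daemon is not running on the build host:

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSocketVMnet dials the unix socket the same way a client must
// before it can hand QEMU a connected file descriptor. With no
// socket_vmnet daemon listening, the dial returns "connection refused",
// matching the STDERR lines captured above.
func probeSocketVMnet(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return fmt.Errorf("socket_vmnet unreachable at %s: %w", path, err)
	}
	return conn.Close()
}

func main() {
	if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
		fmt.Println(err)
	}
}

If this dial fails on the build host, no qemu2-driver test can create a VM, which matches the uniform exit status 80 across the suite.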

TestScheduledStopUnix (10.13s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-616000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-616000 --memory=2048 --driver=qemu2 : exit status 80 (9.978214083s)

-- stdout --
	* [scheduled-stop-616000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-616000" primary control-plane node in "scheduled-stop-616000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-616000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-616000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-616000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-616000" primary control-plane node in "scheduled-stop-616000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-616000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-616000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-16 10:29:41.125391 -0700 PDT m=+2530.368476418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-616000 -n scheduled-stop-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-616000 -n scheduled-stop-616000: exit status 7 (67.616416ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-616000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-616000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-616000
--- FAIL: TestScheduledStopUnix (10.13s)
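
The stderr above shows minikube's create-retry shape: one failed StartHost, deletion of the half-created profile, a fixed pause, then a single retry before exiting with GUEST_PROVISION. As an illustration only (the real logic is in the start.go frames traced earlier, e.g. start.go:729 "Will try again in 5 seconds"), the pattern looks roughly like this in Go:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithRetry sketches the two-attempt flow in the log: warn on the
// first failure, clean up the half-created machine, wait five seconds,
// retry once, and surface the final error as a GUEST_PROVISION failure.
func startWithRetry(create func() error, cleanup func()) error {
	err := create()
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	cleanup()
	time.Sleep(5 * time.Second)
	if err := create(); err != nil {
		return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
	}
	return nil
}

func main() {
	err := startWithRetry(
		func() error { return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`) },
		func() { fmt.Println(`* Deleting "scheduled-stop-616000" in qemu2 ...`) },
	)
	fmt.Println(err)
}

Because the retry hits the same dead socket, both attempts fail identically and the test spends its ~10 seconds entirely on the two create/delete cycles.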

TestSkaffold (13.3s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1591750154 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1591750154 version: (1.066264958s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-919000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-919000 --memory=2600 --driver=qemu2 : exit status 80 (9.773087125s)

-- stdout --
	* [skaffold-919000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-919000" primary control-plane node in "skaffold-919000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-919000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-919000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-919000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-919000" primary control-plane node in "skaffold-919000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-919000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-919000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-16 10:29:54.4259 -0700 PDT m=+2543.669269876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-919000 -n skaffold-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-919000 -n skaffold-919000: exit status 7 (64.184291ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-919000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-919000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-919000
--- FAIL: TestSkaffold (13.30s)
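
Each "(dbg) Run" / "(dbg) Non-zero exit" pair above comes from the harness executing the minikube binary and recording its exit code. A hypothetical reduction of that step in Go (function name and timeout are assumptions, not the helpers_test.go implementation) shows how exit status 80 surfaces to the test:

package integration

import (
	"context"
	"errors"
	"os/exec"
	"testing"
	"time"
)

// runMinikube is a hypothetical reduction of the harness's "(dbg) Run"
// step: execute the binary, capture combined output, and report the
// process exit code on failure (80 corresponds to the GUEST_PROVISION
// errors recorded throughout this report).
func runMinikube(t *testing.T, args ...string) ([]byte, int) {
	t.Helper()
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	out, err := exec.CommandContext(ctx, "out/minikube-darwin-arm64", args...).CombinedOutput()
	if err == nil {
		return out, 0
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return out, ee.ExitCode()
	}
	t.Fatalf("running minikube: %v", err)
	return nil, -1
}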

TestRunningBinaryUpgrade (599.39s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1740902394 start -p running-upgrade-260000 --memory=2200 --vm-driver=qemu2 
E0816 10:31:19.025140    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1740902394 start -p running-upgrade-260000 --memory=2200 --vm-driver=qemu2 : (50.893797125s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-260000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0816 10:33:25.890586    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-260000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m33.8043035s)

-- stdout --
	* [running-upgrade-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-260000" primary control-plane node in "running-upgrade-260000" cluster
	* Updating the running qemu2 "running-upgrade-260000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0816 10:31:28.120875    4989 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:31:28.121007    4989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:31:28.121011    4989 out.go:358] Setting ErrFile to fd 2...
	I0816 10:31:28.121013    4989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:31:28.121149    4989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:31:28.122481    4989 out.go:352] Setting JSON to false
	I0816 10:31:28.139432    4989 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3651,"bootTime":1723825837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:31:28.139508    4989 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:31:28.144670    4989 out.go:177] * [running-upgrade-260000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:31:28.151706    4989 notify.go:220] Checking for updates...
	I0816 10:31:28.155636    4989 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:31:28.158677    4989 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:31:28.161675    4989 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:31:28.164721    4989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:31:28.167673    4989 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:31:28.170632    4989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:31:28.174012    4989 config.go:182] Loaded profile config "running-upgrade-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:31:28.175700    4989 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 10:31:28.178638    4989 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:31:28.182642    4989 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 10:31:28.187694    4989 start.go:297] selected driver: qemu2
	I0816 10:31:28.187701    4989 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50292 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 10:31:28.187761    4989 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:31:28.190288    4989 cni.go:84] Creating CNI manager for ""
	I0816 10:31:28.190308    4989 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:31:28.190345    4989 start.go:340] cluster config:
	{Name:running-upgrade-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50292 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 10:31:28.190390    4989 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:31:28.197626    4989 out.go:177] * Starting "running-upgrade-260000" primary control-plane node in "running-upgrade-260000" cluster
	I0816 10:31:28.201607    4989 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0816 10:31:28.201631    4989 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0816 10:31:28.201639    4989 cache.go:56] Caching tarball of preloaded images
	I0816 10:31:28.201704    4989 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:31:28.201710    4989 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0816 10:31:28.201768    4989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/config.json ...
	I0816 10:31:28.202234    4989 start.go:360] acquireMachinesLock for running-upgrade-260000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:31:28.202271    4989 start.go:364] duration metric: took 29.208µs to acquireMachinesLock for "running-upgrade-260000"
	I0816 10:31:28.202281    4989 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:31:28.202287    4989 fix.go:54] fixHost starting: 
	I0816 10:31:28.202910    4989 fix.go:112] recreateIfNeeded on running-upgrade-260000: state=Running err=<nil>
	W0816 10:31:28.202920    4989 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:31:28.210657    4989 out.go:177] * Updating the running qemu2 "running-upgrade-260000" VM ...
	I0816 10:31:28.222155    4989 machine.go:93] provisionDockerMachine start ...
	I0816 10:31:28.222189    4989 main.go:141] libmachine: Using SSH client type: native
	I0816 10:31:28.222298    4989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050c85a0] 0x1050cae00 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I0816 10:31:28.222302    4989 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:31:28.297620    4989 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-260000
	
	I0816 10:31:28.297636    4989 buildroot.go:166] provisioning hostname "running-upgrade-260000"
	I0816 10:31:28.297685    4989 main.go:141] libmachine: Using SSH client type: native
	I0816 10:31:28.297808    4989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050c85a0] 0x1050cae00 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I0816 10:31:28.297813    4989 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-260000 && echo "running-upgrade-260000" | sudo tee /etc/hostname
	I0816 10:31:28.372507    4989 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-260000
	
	I0816 10:31:28.372562    4989 main.go:141] libmachine: Using SSH client type: native
	I0816 10:31:28.372679    4989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050c85a0] 0x1050cae00 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I0816 10:31:28.372687    4989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-260000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-260000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-260000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:31:28.443731    4989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:31:28.443743    4989 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1189/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1189/.minikube}
	I0816 10:31:28.443754    4989 buildroot.go:174] setting up certificates
	I0816 10:31:28.443768    4989 provision.go:84] configureAuth start
	I0816 10:31:28.443773    4989 provision.go:143] copyHostCerts
	I0816 10:31:28.443844    4989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1189/.minikube/key.pem, removing ...
	I0816 10:31:28.443850    4989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1189/.minikube/key.pem
	I0816 10:31:28.443966    4989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1189/.minikube/key.pem (1679 bytes)
	I0816 10:31:28.444159    4989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.pem, removing ...
	I0816 10:31:28.444162    4989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.pem
	I0816 10:31:28.444217    4989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.pem (1082 bytes)
	I0816 10:31:28.444335    4989 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1189/.minikube/cert.pem, removing ...
	I0816 10:31:28.444338    4989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1189/.minikube/cert.pem
	I0816 10:31:28.444395    4989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1189/.minikube/cert.pem (1123 bytes)
	I0816 10:31:28.444489    4989 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-260000 san=[127.0.0.1 localhost minikube running-upgrade-260000]
	I0816 10:31:28.535225    4989 provision.go:177] copyRemoteCerts
	I0816 10:31:28.535255    4989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:31:28.535262    4989 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/running-upgrade-260000/id_rsa Username:docker}
	I0816 10:31:28.574194    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 10:31:28.581063    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0816 10:31:28.587878    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 10:31:28.594203    4989 provision.go:87] duration metric: took 150.433833ms to configureAuth
	I0816 10:31:28.594214    4989 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:31:28.594325    4989 config.go:182] Loaded profile config "running-upgrade-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:31:28.594354    4989 main.go:141] libmachine: Using SSH client type: native
	I0816 10:31:28.594438    4989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050c85a0] 0x1050cae00 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I0816 10:31:28.594442    4989 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:31:28.667409    4989 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:31:28.667418    4989 buildroot.go:70] root file system type: tmpfs
	I0816 10:31:28.667482    4989 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:31:28.667535    4989 main.go:141] libmachine: Using SSH client type: native
	I0816 10:31:28.667651    4989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050c85a0] 0x1050cae00 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I0816 10:31:28.667685    4989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:31:28.744276    4989 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:31:28.744333    4989 main.go:141] libmachine: Using SSH client type: native
	I0816 10:31:28.744446    4989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050c85a0] 0x1050cae00 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I0816 10:31:28.744455    4989 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:31:28.815865    4989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:31:28.815875    4989 machine.go:96] duration metric: took 593.727625ms to provisionDockerMachine
	I0816 10:31:28.815880    4989 start.go:293] postStartSetup for "running-upgrade-260000" (driver="qemu2")
	I0816 10:31:28.815886    4989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:31:28.815936    4989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:31:28.815945    4989 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/running-upgrade-260000/id_rsa Username:docker}
	I0816 10:31:28.855612    4989 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:31:28.857081    4989 info.go:137] Remote host: Buildroot 2021.02.12
	I0816 10:31:28.857088    4989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1189/.minikube/addons for local assets ...
	I0816 10:31:28.857182    4989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1189/.minikube/files for local assets ...
	I0816 10:31:28.857310    4989 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1189/.minikube/files/etc/ssl/certs/20542.pem -> 20542.pem in /etc/ssl/certs
	I0816 10:31:28.857439    4989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:31:28.860182    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/files/etc/ssl/certs/20542.pem --> /etc/ssl/certs/20542.pem (1708 bytes)
	I0816 10:31:28.867304    4989 start.go:296] duration metric: took 51.419625ms for postStartSetup
	I0816 10:31:28.867315    4989 fix.go:56] duration metric: took 665.044583ms for fixHost
	I0816 10:31:28.867356    4989 main.go:141] libmachine: Using SSH client type: native
	I0816 10:31:28.867460    4989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050c85a0] 0x1050cae00 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I0816 10:31:28.867468    4989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:31:28.937449    4989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829488.994424055
	
	I0816 10:31:28.937456    4989 fix.go:216] guest clock: 1723829488.994424055
	I0816 10:31:28.937460    4989 fix.go:229] Guest: 2024-08-16 10:31:28.994424055 -0700 PDT Remote: 2024-08-16 10:31:28.867317 -0700 PDT m=+0.767613168 (delta=127.107055ms)
	I0816 10:31:28.937470    4989 fix.go:200] guest clock delta is within tolerance: 127.107055ms
	I0816 10:31:28.937473    4989 start.go:83] releasing machines lock for "running-upgrade-260000", held for 735.213084ms
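	A minimal sketch, with illustrative names (this is not minikube's actual fix.go code), of the guest-clock check logged above: parse the output of `date +%s.%N`, compare it against the host clock, and accept the host if the delta is within tolerance.
	
	package main
	
	import (
		"fmt"
		"strconv"
		"time"
	)
	
	// parseEpoch turns `date +%s.%N` output such as "1723829488.994424055" into
	// a time.Time. Float parsing loses some nanosecond precision, which is fine
	// for a millisecond-scale tolerance check.
	func parseEpoch(s string) (time.Time, error) {
		f, err := strconv.ParseFloat(s, 64)
		if err != nil {
			return time.Time{}, err
		}
		sec := int64(f)
		return time.Unix(sec, int64((f-float64(sec))*1e9)), nil
	}
	
	func main() {
		guest, err := parseEpoch("1723829488.994424055") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // hypothetical threshold, not minikube's constant
		fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}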
	I0816 10:31:28.937526    4989 ssh_runner.go:195] Run: cat /version.json
	I0816 10:31:28.937533    4989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:31:28.937536    4989 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/running-upgrade-260000/id_rsa Username:docker}
	I0816 10:31:28.937549    4989 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/running-upgrade-260000/id_rsa Username:docker}
	W0816 10:31:28.938147    4989 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50368->127.0.0.1:50260: write: broken pipe
	I0816 10:31:28.938166    4989 retry.go:31] will retry after 220.670312ms: ssh: handshake failed: write tcp 127.0.0.1:50368->127.0.0.1:50260: write: broken pipe
	W0816 10:31:29.198535    4989 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0816 10:31:29.198608    4989 ssh_runner.go:195] Run: systemctl --version
	I0816 10:31:29.200422    4989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:31:29.202286    4989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:31:29.202313    4989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0816 10:31:29.205066    4989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0816 10:31:29.209330    4989 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
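	The sed one-liners above are dense; what they accomplish is pinning the bridge/podman CNI configs to the pod CIDR. A simplified sketch of the same rewrite, assuming the usual host-local ipam layout of 87-podman-bridge.conflist (the real code edits the file in place with sed):
	
	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	func main() {
		// Shape modeled on a typical podman bridge conflist (assumption).
		conflist := []byte(`{
		  "cniVersion": "0.4.0",
		  "name": "podman",
		  "plugins": [{
		    "type": "bridge",
		    "ipam": {"type": "host-local",
		             "ranges": [[{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}]]}
		  }]
		}`)
		var cfg map[string]any
		if err := json.Unmarshal(conflist, &cfg); err != nil {
			panic(err)
		}
		for _, p := range cfg["plugins"].([]any) {
			plugin := p.(map[string]any)
			if plugin["type"] != "bridge" {
				continue
			}
			r := plugin["ipam"].(map[string]any)["ranges"].([]any)[0].([]any)[0].(map[string]any)
			r["subnet"] = "10.244.0.0/16" // the pod CIDR minikube standardizes on
			r["gateway"] = "10.244.0.1"
		}
		out, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(out))
	}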
	I0816 10:31:29.209339    4989 start.go:495] detecting cgroup driver to use...
	I0816 10:31:29.209400    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:31:29.214746    4989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0816 10:31:29.217925    4989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:31:29.220674    4989 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:31:29.220697    4989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:31:29.223867    4989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:31:29.227268    4989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:31:29.230908    4989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:31:29.233892    4989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:31:29.236747    4989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:31:29.239944    4989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:31:29.243461    4989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:31:29.246829    4989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:31:29.249459    4989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:31:29.252166    4989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:31:29.351003    4989 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:31:29.357517    4989 start.go:495] detecting cgroup driver to use...
	I0816 10:31:29.357563    4989 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:31:29.364624    4989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:31:29.369678    4989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:31:29.376501    4989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:31:29.381287    4989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:31:29.385544    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:31:29.391024    4989 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:31:29.392290    4989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:31:29.394852    4989 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0816 10:31:29.400081    4989 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:31:29.492385    4989 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:31:29.581483    4989 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:31:29.581542    4989 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:31:29.586853    4989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:31:29.682753    4989 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:31:43.227425    4989 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.544943917s)
	I0816 10:31:43.227506    4989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:31:43.233225    4989 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:31:43.242557    4989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:31:43.247501    4989 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:31:43.332064    4989 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:31:43.412397    4989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:31:43.494316    4989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:31:43.501712    4989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:31:43.506339    4989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:31:43.588678    4989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:31:43.629483    4989 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:31:43.629551    4989 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:31:43.631504    4989 start.go:563] Will wait 60s for crictl version
	I0816 10:31:43.631540    4989 ssh_runner.go:195] Run: which crictl
	I0816 10:31:43.633113    4989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:31:43.645785    4989 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0816 10:31:43.645850    4989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:31:43.658672    4989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:31:43.675035    4989 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0816 10:31:43.675103    4989 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0816 10:31:43.676403    4989 kubeadm.go:883] updating cluster {Name:running-upgrade-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50292 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0816 10:31:43.676447    4989 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0816 10:31:43.676483    4989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:31:43.686566    4989 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:31:43.686574    4989 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0816 10:31:43.686620    4989 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:31:43.689535    4989 ssh_runner.go:195] Run: which lz4
	I0816 10:31:43.690830    4989 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 10:31:43.692002    4989 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 10:31:43.692012    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0816 10:31:44.622254    4989 docker.go:649] duration metric: took 931.472916ms to copy over tarball
	I0816 10:31:44.622308    4989 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 10:31:45.760291    4989 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.137993417s)
	I0816 10:31:45.760306    4989 ssh_runner.go:146] rm: /preloaded.tar.lz4
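	The stat-then-scp sequence above recurs throughout this log (the preload tarball here, individual image tarballs later): probe the destination first and transfer only on "No such file or directory", so reruns skip work that is already done. A local-filesystem sketch of the same idempotent pattern, with illustrative names:
	
	package main
	
	import (
		"fmt"
		"io"
		"os"
	)
	
	// copyIfMissing stats the destination and copies the source only when the
	// destination does not already exist, mirroring the existence checks above.
	func copyIfMissing(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			return nil // already present; skip the transfer
		} else if !os.IsNotExist(err) {
			return err
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}
	
	func main() {
		fmt.Println(copyIfMissing("/tmp/preloaded.tar.lz4", "/tmp/copy.tar.lz4"))
	}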
	I0816 10:31:45.776201    4989 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:31:45.779340    4989 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0816 10:31:45.783981    4989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:31:45.867748    4989 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:31:46.062198    4989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:31:46.073537    4989 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:31:46.073545    4989 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0816 10:31:46.073550    4989 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 10:31:46.077934    4989 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:31:46.080345    4989 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:31:46.083021    4989 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:31:46.083463    4989 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:31:46.085614    4989 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:31:46.085661    4989 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:31:46.086634    4989 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:31:46.086722    4989 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:31:46.087920    4989 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0816 10:31:46.087935    4989 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:31:46.089400    4989 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0816 10:31:46.089400    4989 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:31:46.091166    4989 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0816 10:31:46.091973    4989 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:31:46.092223    4989 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0816 10:31:46.093795    4989 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:31:46.516769    4989 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:31:46.523379    4989 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:31:46.531368    4989 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0816 10:31:46.531399    4989 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:31:46.531461    4989 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:31:46.547159    4989 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0816 10:31:46.547179    4989 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:31:46.547239    4989 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:31:46.547709    4989 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0816 10:31:46.556911    4989 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:31:46.558316    4989 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0816 10:31:46.563459    4989 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0816 10:31:46.567932    4989 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0816 10:31:46.567953    4989 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:31:46.567996    4989 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:31:46.569525    4989 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:31:46.580862    4989 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0816 10:31:46.580882    4989 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0816 10:31:46.580925    4989 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0816 10:31:46.584144    4989 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0816 10:31:46.584247    4989 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:31:46.591995    4989 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0816 10:31:46.592019    4989 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:31:46.592071    4989 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:31:46.592146    4989 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0816 10:31:46.598301    4989 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0816 10:31:46.598400    4989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0816 10:31:46.600716    4989 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0816 10:31:46.600736    4989 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:31:46.600774    4989 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:31:46.610378    4989 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0816 10:31:46.610409    4989 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0816 10:31:46.610423    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0816 10:31:46.610437    4989 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0816 10:31:46.610535    4989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0816 10:31:46.611880    4989 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0816 10:31:46.612198    4989 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0816 10:31:46.612210    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0816 10:31:46.646814    4989 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0816 10:31:46.646836    4989 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0816 10:31:46.646889    4989 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0816 10:31:46.679192    4989 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0816 10:31:46.679305    4989 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:31:46.692610    4989 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0816 10:31:46.692627    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0816 10:31:46.705814    4989 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0816 10:31:46.705936    4989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0816 10:31:46.722511    4989 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0816 10:31:46.722537    4989 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:31:46.722589    4989 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:31:46.803773    4989 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0816 10:31:46.803793    4989 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0816 10:31:46.803825    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0816 10:31:46.827395    4989 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0816 10:31:46.827408    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0816 10:31:47.992433    4989 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.269829583s)
	I0816 10:31:47.992479    4989 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 10:31:47.992499    4989 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load": (1.165097458s)
	I0816 10:31:47.992515    4989 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0816 10:31:47.992563    4989 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0816 10:31:47.992581    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0816 10:31:47.992939    4989 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 10:31:48.158374    4989 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0816 10:31:48.158405    4989 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0816 10:31:48.158433    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0816 10:31:48.187277    4989 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 10:31:48.187290    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0816 10:31:48.432055    4989 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 10:31:48.432097    4989 cache_images.go:92] duration metric: took 2.358590708s to LoadCachedImages
	W0816 10:31:48.432131    4989 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
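	Each "needs transfer" decision above follows one pattern: compare `docker image inspect --format {{.Id}}` against the expected digest, and on mismatch remove the stale tag and reload the image from the cached tarball. A hypothetical sketch (the helper name and truncated digest are illustrative, not minikube's code):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func ensureImage(ref, wantID, tarball string) error {
		// Empty output (inspect failed) also counts as a mismatch.
		out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
		if strings.TrimSpace(string(out)) == wantID {
			return nil // correct image already present
		}
		exec.Command("docker", "rmi", ref).Run() // ignore error: image may be absent
		// Equivalent of the `sudo cat tarball | docker load` seen in the log.
		if err := exec.Command("docker", "load", "-i", tarball).Run(); err != nil {
			return err
		}
		fmt.Println("transferred and loaded", ref)
		return nil
	}
	
	func main() {
		_ = ensureImage("registry.k8s.io/pause:3.7",
			"sha256:e5a475a038...", // placeholder: real IDs are full sha256 digests
			"/var/lib/minikube/images/pause_3.7")
	}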
	I0816 10:31:48.432137    4989 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0816 10:31:48.432203    4989 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-260000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:31:48.432268    4989 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:31:48.455379    4989 cni.go:84] Creating CNI manager for ""
	I0816 10:31:48.455390    4989 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:31:48.455395    4989 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:31:48.455408    4989 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-260000 NodeName:running-upgrade-260000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:31:48.455738    4989 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-260000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
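	The kubeadm config written above is one file carrying four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`. A stdlib-only sketch that splits such a file and reports each document's kind:
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		// Same four kinds as the generated kubeadm.yaml above.
		cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\n" +
			"apiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n---\n" +
			"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\n" +
			"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
		for i, doc := range strings.Split(cfg, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind: ") {
					fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
				}
			}
		}
	}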
	I0816 10:31:48.455814    4989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0816 10:31:48.459711    4989 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:31:48.459750    4989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 10:31:48.462400    4989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0816 10:31:48.467203    4989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:31:48.472170    4989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0816 10:31:48.477553    4989 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0816 10:31:48.479002    4989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:31:48.562150    4989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:31:48.567306    4989 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000 for IP: 10.0.2.15
	I0816 10:31:48.567313    4989 certs.go:194] generating shared ca certs ...
	I0816 10:31:48.567329    4989 certs.go:226] acquiring lock for ca certs: {Name:mkd0f48b500cbb75fb3e9a7c625fdb17e399313f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:31:48.567474    4989 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.key
	I0816 10:31:48.567525    4989 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/proxy-client-ca.key
	I0816 10:31:48.567530    4989 certs.go:256] generating profile certs ...
	I0816 10:31:48.567587    4989 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/client.key
	I0816 10:31:48.567605    4989 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/apiserver.key.7d7dfbf9
	I0816 10:31:48.567617    4989 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/apiserver.crt.7d7dfbf9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0816 10:31:48.725586    4989 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/apiserver.crt.7d7dfbf9 ...
	I0816 10:31:48.725597    4989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/apiserver.crt.7d7dfbf9: {Name:mke82552b1b3e179e2a54d59423ced8ff40e2c44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:31:48.725879    4989 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/apiserver.key.7d7dfbf9 ...
	I0816 10:31:48.725884    4989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/apiserver.key.7d7dfbf9: {Name:mk683d1e325b1389f1f1978f1f2f03318750b839 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:31:48.726028    4989 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/apiserver.crt.7d7dfbf9 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/apiserver.crt
	I0816 10:31:48.726168    4989 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/apiserver.key.7d7dfbf9 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/apiserver.key
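	The apiserver cert generated above embeds the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]: the kubernetes service VIP, loopback, and the node addresses. A standard-library sketch of issuing such a cert; self-signed here for brevity, whereas the real one is signed by minikubeCA:
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			IPAddresses: []net.IP{ // the SANs from the log line above
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
			},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}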
	I0816 10:31:48.726322    4989 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/proxy-client.key
	I0816 10:31:48.726457    4989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/2054.pem (1338 bytes)
	W0816 10:31:48.726486    4989 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/2054_empty.pem, impossibly tiny 0 bytes
	I0816 10:31:48.726491    4989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:31:48.726510    4989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem (1082 bytes)
	I0816 10:31:48.726528    4989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:31:48.726548    4989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/key.pem (1679 bytes)
	I0816 10:31:48.726590    4989 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/files/etc/ssl/certs/20542.pem (1708 bytes)
	I0816 10:31:48.726919    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:31:48.734269    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 10:31:48.741329    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:31:48.749178    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 10:31:48.756595    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 10:31:48.763777    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 10:31:48.771183    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:31:48.778232    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 10:31:48.784939    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/2054.pem --> /usr/share/ca-certificates/2054.pem (1338 bytes)
	I0816 10:31:48.792475    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/files/etc/ssl/certs/20542.pem --> /usr/share/ca-certificates/20542.pem (1708 bytes)
	I0816 10:31:48.799782    4989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:31:48.806410    4989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:31:48.811426    4989 ssh_runner.go:195] Run: openssl version
	I0816 10:31:48.813528    4989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20542.pem && ln -fs /usr/share/ca-certificates/20542.pem /etc/ssl/certs/20542.pem"
	I0816 10:31:48.817118    4989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20542.pem
	I0816 10:31:48.818686    4989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:55 /usr/share/ca-certificates/20542.pem
	I0816 10:31:48.818706    4989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20542.pem
	I0816 10:31:48.820471    4989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20542.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 10:31:48.823185    4989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:31:48.826082    4989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:31:48.827656    4989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:31:48.827680    4989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:31:48.829479    4989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:31:48.832736    4989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2054.pem && ln -fs /usr/share/ca-certificates/2054.pem /etc/ssl/certs/2054.pem"
	I0816 10:31:48.835683    4989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2054.pem
	I0816 10:31:48.837144    4989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:55 /usr/share/ca-certificates/2054.pem
	I0816 10:31:48.837165    4989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2054.pem
	I0816 10:31:48.839056    4989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2054.pem /etc/ssl/certs/51391683.0"
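	The openssl/ln pairs above implement the classic c_rehash scheme: `openssl x509 -hash -noout` prints the certificate's subject-name hash (b5213941 for minikubeCA.pem, matching the symlink created above), and a `<hash>.0` link in /etc/ssl/certs lets OpenSSL find the CA by hash at verification time. A small sketch of one such step (paths taken from the log; needs root to write /etc/ssl/certs):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		const cert = "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // `ln -fs` semantics: replace any stale link
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link)
	}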
	I0816 10:31:48.841785    4989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:31:48.843345    4989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 10:31:48.845120    4989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 10:31:48.847033    4989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 10:31:48.848876    4989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 10:31:48.851121    4989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 10:31:48.852800    4989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
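	`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds; that is how the probes above spot soon-to-expire certs. The same check in stdlib Go (path from the log; assumes a single PEM block):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(24 * time.Hour) // 86400 seconds, as in the log
		fmt.Println("expires within 24h:", cert.NotAfter.Before(deadline))
	}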
	I0816 10:31:48.854744    4989 kubeadm.go:392] StartCluster: {Name:running-upgrade-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50292 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 10:31:48.854804    4989 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:31:48.870414    4989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:31:48.873954    4989 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 10:31:48.873959    4989 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 10:31:48.873989    4989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 10:31:48.877078    4989 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:31:48.877333    4989 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-260000" does not appear in /Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:31:48.877381    4989 kubeconfig.go:62] /Users/jenkins/minikube-integration/19461-1189/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-260000" cluster setting kubeconfig missing "running-upgrade-260000" context setting]
	I0816 10:31:48.877515    4989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/kubeconfig: {Name:mk2e4f2b039616ddb85ed20d74e703a928518229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:31:48.878656    4989 kapi.go:59] client config for running-upgrade-260000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106681610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:31:48.878974    4989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 10:31:48.881940    4989 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-260000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
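	Both the docker.service update earlier and this drift check lean on diff's exit status: 0 means the files match, 1 means they differ, 2 means diff itself failed. A hypothetical sketch of reading that tri-state from Go (no sudo here; the real commands run remotely as root):
	
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("no drift; keep the existing config")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
			fmt.Printf("drift detected, will reconfigure:\n%s", out)
		default:
			panic(err) // exit code 2 or exec failure
		}
	}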
	I0816 10:31:48.881946    4989 kubeadm.go:1160] stopping kube-system containers ...
	I0816 10:31:48.881985    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:31:48.893103    4989 docker.go:483] Stopping containers: [697b7c1cf4e9 c2b17a5f6e56 1a8cf727046f c23e920464d9 a7a83a83ddc9 1cf502de6722 a591fea74861 89d8f79ac392 359ce0ff7bb4 74067d4f196b 50b41063614c fdc55d3a03be 3f563ea41eb8 f77ae7f2dad3]
	I0816 10:31:48.893170    4989 ssh_runner.go:195] Run: docker stop 697b7c1cf4e9 c2b17a5f6e56 1a8cf727046f c23e920464d9 a7a83a83ddc9 1cf502de6722 a591fea74861 89d8f79ac392 359ce0ff7bb4 74067d4f196b 50b41063614c fdc55d3a03be 3f563ea41eb8 f77ae7f2dad3
	I0816 10:31:48.904385    4989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 10:31:49.003116    4989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:31:49.007593    4989 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Aug 16 17:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug 16 17:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 16 17:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Aug 16 17:31 /etc/kubernetes/scheduler.conf
	
	I0816 10:31:49.007627    4989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/admin.conf
	I0816 10:31:49.010831    4989 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:31:49.010861    4989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:31:49.014063    4989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/kubelet.conf
	I0816 10:31:49.017386    4989 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:31:49.017407    4989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:31:49.020872    4989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/controller-manager.conf
	I0816 10:31:49.023990    4989 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:31:49.024020    4989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:31:49.026820    4989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/scheduler.conf
	I0816 10:31:49.029616    4989 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:31:49.029634    4989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 10:31:49.032626    4989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:31:49.035479    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:31:49.056830    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:31:49.491550    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:31:49.824756    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:31:49.848117    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:31:49.872413    4989 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:31:49.872493    4989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:31:50.374655    4989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:31:50.874595    4989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:31:50.879201    4989 api_server.go:72] duration metric: took 1.006811875s to wait for apiserver process to appear ...
	I0816 10:31:50.879210    4989 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:31:50.879224    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:31:55.881325    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:31:55.881355    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:32:00.881776    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:32:00.881858    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:32:05.882773    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:32:05.882796    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:32:10.883516    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:32:10.883592    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:32:15.884937    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:32:15.885003    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:32:20.886841    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:32:20.886901    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:32:25.887443    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:32:25.887511    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:32:30.889838    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:32:30.889902    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:32:35.890611    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:32:35.890699    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:32:40.893309    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:32:40.893372    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:32:45.895849    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:32:45.895913    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:32:50.898340    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:32:50.898522    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:32:50.910374    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:32:50.910456    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:32:50.925578    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:32:50.925662    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:32:50.936120    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:32:50.936192    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:32:50.950520    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:32:50.950596    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:32:50.960713    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:32:50.960782    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:32:50.970828    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:32:50.970893    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:32:50.980756    4989 logs.go:276] 0 containers: []
	W0816 10:32:50.980770    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:32:50.980828    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:32:50.990826    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:32:50.990845    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:32:50.990850    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:32:51.005172    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:32:51.005182    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:32:51.020633    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:32:51.020642    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:32:51.032628    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:32:51.032642    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:32:51.049964    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:32:51.049975    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:32:51.065415    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:32:51.065428    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:32:51.084413    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:32:51.084424    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:32:51.095892    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:32:51.095910    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:32:51.107031    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:32:51.107040    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:32:51.111985    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:32:51.111993    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:32:51.147356    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:32:51.147367    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:32:51.161890    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:32:51.161898    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:32:51.176432    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:32:51.176444    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:32:51.188288    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:32:51.188301    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:32:51.213421    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:32:51.213430    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:32:51.254948    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:32:51.254957    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:32:51.324712    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:32:51.324726    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:32:53.838227    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:32:58.840798    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:32:58.841153    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:32:58.870630    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:32:58.870751    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:32:58.889178    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:32:58.889268    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:32:58.902709    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:32:58.902777    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:32:58.914814    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:32:58.914876    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:32:58.929536    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:32:58.929603    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:32:58.939843    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:32:58.939916    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:32:58.950223    4989 logs.go:276] 0 containers: []
	W0816 10:32:58.950235    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:32:58.950294    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:32:58.960587    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:32:58.960607    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:32:58.960613    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:32:58.978811    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:32:58.978823    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:32:58.996181    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:32:58.996193    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:32:59.000440    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:32:59.000445    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:32:59.014287    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:32:59.014297    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:32:59.025821    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:32:59.025836    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:32:59.039336    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:32:59.039351    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:32:59.049935    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:32:59.049944    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:32:59.061788    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:32:59.061799    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:32:59.095144    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:32:59.095156    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:32:59.117146    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:32:59.117155    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:32:59.128484    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:32:59.128495    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:32:59.154458    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:32:59.154468    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:32:59.195523    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:32:59.195534    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:32:59.231597    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:32:59.231610    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:32:59.246419    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:32:59.246429    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:32:59.257386    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:32:59.257398    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:33:01.770962    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:33:06.773473    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:33:06.773947    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:33:06.817750    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:33:06.817883    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:33:06.841023    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:33:06.841134    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:33:06.856230    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:33:06.856296    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:33:06.872699    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:33:06.872774    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:33:06.883125    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:33:06.883201    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:33:06.894572    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:33:06.894647    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:33:06.907386    4989 logs.go:276] 0 containers: []
	W0816 10:33:06.907402    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:33:06.907459    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:33:06.917920    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:33:06.917939    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:33:06.917945    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:33:06.930035    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:33:06.930047    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:33:06.940945    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:33:06.940958    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:33:06.980192    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:33:06.980202    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:33:06.994107    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:33:06.994116    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:33:07.005530    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:33:07.005539    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:33:07.019179    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:33:07.019191    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:33:07.053373    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:33:07.053383    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:33:07.064988    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:33:07.065002    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:33:07.100795    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:33:07.100806    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:33:07.117790    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:33:07.117800    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:33:07.143892    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:33:07.143899    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:33:07.155832    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:33:07.155846    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:33:07.171229    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:33:07.171240    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:33:07.188813    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:33:07.188824    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:33:07.193103    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:33:07.193109    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:33:07.207098    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:33:07.207108    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:33:09.724428    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:33:14.727066    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:33:14.727424    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:33:14.767714    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:33:14.767832    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:33:14.783625    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:33:14.783700    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:33:14.801462    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:33:14.801539    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:33:14.813079    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:33:14.813144    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:33:14.823824    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:33:14.823883    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:33:14.834208    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:33:14.834280    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:33:14.843792    4989 logs.go:276] 0 containers: []
	W0816 10:33:14.843805    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:33:14.843860    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:33:14.854426    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:33:14.854441    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:33:14.854445    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:33:14.879662    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:33:14.879672    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:33:14.893211    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:33:14.893222    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:33:14.907504    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:33:14.907513    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:33:14.945945    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:33:14.945959    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:33:14.959241    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:33:14.959253    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:33:14.981359    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:33:14.981374    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:33:15.022915    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:33:15.022924    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:33:15.027620    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:33:15.027626    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:33:15.041663    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:33:15.041675    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:33:15.052747    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:33:15.052760    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:33:15.064078    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:33:15.064091    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:33:15.077597    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:33:15.077608    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:33:15.111369    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:33:15.111382    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:33:15.131868    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:33:15.131878    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:33:15.143572    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:33:15.143583    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:33:15.155129    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:33:15.155142    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:33:17.673735    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:33:22.676400    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:33:22.676887    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:33:22.717844    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:33:22.717982    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:33:22.739928    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:33:22.740043    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:33:22.754821    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:33:22.754893    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:33:22.767291    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:33:22.767367    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:33:22.778452    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:33:22.778511    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:33:22.789337    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:33:22.789408    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:33:22.799509    4989 logs.go:276] 0 containers: []
	W0816 10:33:22.799524    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:33:22.799580    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:33:22.812495    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:33:22.812511    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:33:22.812516    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:33:22.829906    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:33:22.829919    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:33:22.869803    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:33:22.869810    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:33:22.883916    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:33:22.883927    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:33:22.895859    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:33:22.895872    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:33:22.900613    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:33:22.900623    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:33:22.935374    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:33:22.935388    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:33:22.947227    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:33:22.947240    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:33:22.973250    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:33:22.973258    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:33:22.985298    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:33:22.985312    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:33:23.005909    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:33:23.005919    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:33:23.018313    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:33:23.018324    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:33:23.029243    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:33:23.029255    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:33:23.046353    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:33:23.046368    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:33:23.057770    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:33:23.057782    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:33:23.091054    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:33:23.091068    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:33:23.105470    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:33:23.105484    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:33:25.622234    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:33:30.624851    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:33:30.625190    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:33:30.655132    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:33:30.655253    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:33:30.673669    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:33:30.673753    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:33:30.687597    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:33:30.687675    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:33:30.699216    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:33:30.699284    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:33:30.710256    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:33:30.710325    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:33:30.720484    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:33:30.720551    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:33:30.733345    4989 logs.go:276] 0 containers: []
	W0816 10:33:30.733357    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:33:30.733414    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:33:30.746713    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:33:30.746732    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:33:30.746750    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:33:30.782118    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:33:30.782127    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:33:30.795580    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:33:30.795589    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:33:30.830068    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:33:30.830078    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:33:30.845860    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:33:30.845870    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:33:30.862607    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:33:30.862616    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:33:30.902381    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:33:30.902391    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:33:30.926507    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:33:30.926519    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:33:30.941154    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:33:30.941165    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:33:30.952366    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:33:30.952381    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:33:30.963938    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:33:30.963951    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:33:30.985709    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:33:30.985721    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:33:31.000903    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:33:31.000913    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:33:31.014688    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:33:31.014700    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:33:31.027680    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:33:31.027694    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:33:31.042141    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:33:31.042150    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:33:31.054204    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:33:31.054217    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:33:33.560718    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:33:38.563400    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:33:38.563876    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:33:38.605916    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:33:38.606053    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:33:38.628368    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:33:38.628504    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:33:38.644048    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:33:38.644128    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:33:38.656518    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:33:38.656588    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:33:38.667391    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:33:38.667455    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:33:38.677991    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:33:38.678054    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:33:38.688355    4989 logs.go:276] 0 containers: []
	W0816 10:33:38.688373    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:33:38.688435    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:33:38.699432    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:33:38.699447    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:33:38.699452    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:33:38.721853    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:33:38.721861    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:33:38.733858    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:33:38.733869    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:33:38.751327    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:33:38.751337    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:33:38.790776    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:33:38.790784    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:33:38.795070    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:33:38.795079    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:33:38.833258    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:33:38.833268    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:33:38.869746    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:33:38.869756    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:33:38.883264    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:33:38.883276    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:33:38.898352    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:33:38.898362    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:33:38.909966    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:33:38.909978    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:33:38.933709    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:33:38.933718    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:33:38.944838    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:33:38.944849    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:33:38.958653    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:33:38.958662    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:33:38.970179    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:33:38.970193    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:33:38.981012    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:33:38.981025    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:33:38.994607    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:33:38.994619    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:33:41.508505    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:33:46.509511    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:33:46.509668    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:33:46.530751    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:33:46.530842    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:33:46.546499    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:33:46.546584    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:33:46.558978    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:33:46.559040    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:33:46.569626    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:33:46.569684    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:33:46.579963    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:33:46.580018    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:33:46.590743    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:33:46.590797    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:33:46.600978    4989 logs.go:276] 0 containers: []
	W0816 10:33:46.600991    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:33:46.601049    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:33:46.610893    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:33:46.610909    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:33:46.610916    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:33:46.647894    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:33:46.647905    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:33:46.661773    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:33:46.661783    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:33:46.678824    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:33:46.678836    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:33:46.692790    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:33:46.692801    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:33:46.704259    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:33:46.704273    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:33:46.744177    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:33:46.744186    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:33:46.778004    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:33:46.778015    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:33:46.791731    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:33:46.791742    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:33:46.806505    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:33:46.806514    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:33:46.821058    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:33:46.821067    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:33:46.834456    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:33:46.834469    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:33:46.839366    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:33:46.839373    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:33:46.855114    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:33:46.855124    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:33:46.866642    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:33:46.866655    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:33:46.878161    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:33:46.878170    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:33:46.904484    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:33:46.904494    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:33:49.417516    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:33:54.419630    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:33:54.420078    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:33:54.460534    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:33:54.460661    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:33:54.482281    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:33:54.482392    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:33:54.497601    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:33:54.497686    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:33:54.510010    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:33:54.510075    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:33:54.520850    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:33:54.520910    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:33:54.531781    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:33:54.531844    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:33:54.542692    4989 logs.go:276] 0 containers: []
	W0816 10:33:54.542702    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:33:54.542751    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:33:54.552939    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:33:54.552959    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:33:54.552965    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:33:54.571041    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:33:54.571053    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:33:54.595820    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:33:54.595828    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:33:54.608165    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:33:54.608175    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:33:54.623774    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:33:54.623786    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:33:54.636693    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:33:54.636704    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:33:54.675327    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:33:54.675338    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:33:54.690453    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:33:54.690467    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:33:54.705168    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:33:54.705182    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:33:54.718673    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:33:54.718684    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:33:54.742206    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:33:54.742217    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:33:54.757039    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:33:54.757049    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:33:54.796404    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:33:54.796416    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:33:54.800905    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:33:54.800914    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:33:54.826394    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:33:54.826406    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:33:54.838108    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:33:54.838120    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:33:54.870619    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:33:54.870632    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:33:57.393545    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:34:02.395754    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:34:02.395949    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:34:02.412898    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:34:02.412987    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:34:02.426446    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:34:02.426520    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:34:02.438849    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:34:02.438922    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:34:02.451175    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:34:02.451252    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:34:02.461513    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:34:02.461574    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:34:02.473166    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:34:02.473233    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:34:02.487834    4989 logs.go:276] 0 containers: []
	W0816 10:34:02.487846    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:34:02.487906    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:34:02.502477    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:34:02.502495    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:34:02.502502    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:34:02.519437    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:34:02.519449    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:34:02.556129    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:34:02.556141    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:34:02.572929    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:34:02.572940    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:34:02.584076    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:34:02.584088    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:34:02.599640    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:34:02.599650    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:34:02.604298    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:34:02.604305    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:34:02.618280    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:34:02.618290    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:34:02.630484    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:34:02.630496    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:34:02.645323    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:34:02.645334    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:34:02.688428    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:34:02.688438    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:34:02.724061    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:34:02.724073    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:34:02.736003    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:34:02.736012    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:34:02.760751    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:34:02.760761    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:34:02.774321    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:34:02.774333    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:34:02.786877    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:34:02.786887    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:34:02.803923    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:34:02.803935    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:34:05.317858    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:34:10.320037    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:34:10.320310    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:34:10.346741    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:34:10.346868    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:34:10.364323    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:34:10.364401    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:34:10.377277    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:34:10.377341    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:34:10.389114    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:34:10.389176    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:34:10.405985    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:34:10.406050    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:34:10.416721    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:34:10.416778    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:34:10.427062    4989 logs.go:276] 0 containers: []
	W0816 10:34:10.427076    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:34:10.427133    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:34:10.438113    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:34:10.438128    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:34:10.438136    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:34:10.484881    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:34:10.484892    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:34:10.520328    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:34:10.520342    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:34:10.532349    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:34:10.532359    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:34:10.544039    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:34:10.544050    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:34:10.548450    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:34:10.548458    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:34:10.567541    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:34:10.567552    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:34:10.601663    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:34:10.601675    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:34:10.613644    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:34:10.613655    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:34:10.632726    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:34:10.632735    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:34:10.658130    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:34:10.658136    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:34:10.674321    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:34:10.674331    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:34:10.688276    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:34:10.688288    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:34:10.704012    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:34:10.704022    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:34:10.718883    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:34:10.718893    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:34:10.730685    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:34:10.730697    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:34:10.742529    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:34:10.742543    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:34:13.258802    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:34:18.259996    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:34:18.260106    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:34:18.272583    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:34:18.272656    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:34:18.283945    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:34:18.284016    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:34:18.299693    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:34:18.299764    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:34:18.311411    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:34:18.311493    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:34:18.325626    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:34:18.325699    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:34:18.337798    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:34:18.337872    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:34:18.349108    4989 logs.go:276] 0 containers: []
	W0816 10:34:18.349121    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:34:18.349182    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:34:18.364957    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:34:18.364978    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:34:18.364983    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:34:18.378442    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:34:18.378455    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:34:18.394318    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:34:18.394331    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:34:18.407854    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:34:18.407866    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:34:18.424521    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:34:18.424533    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:34:18.468473    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:34:18.468489    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:34:18.510645    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:34:18.510660    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:34:18.526409    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:34:18.526421    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:34:18.542563    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:34:18.542576    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:34:18.547928    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:34:18.547941    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:34:18.561295    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:34:18.561310    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:34:18.574333    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:34:18.574344    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:34:18.588084    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:34:18.588097    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:34:18.615027    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:34:18.615045    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:34:18.630344    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:34:18.630357    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:34:18.666989    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:34:18.667008    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:34:18.682218    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:34:18.682235    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:34:21.204064    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:34:26.205403    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:34:26.205503    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:34:26.216066    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:34:26.216133    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:34:26.226719    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:34:26.226801    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:34:26.237699    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:34:26.237767    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:34:26.248536    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:34:26.248605    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:34:26.260488    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:34:26.260573    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:34:26.272385    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:34:26.272462    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:34:26.282451    4989 logs.go:276] 0 containers: []
	W0816 10:34:26.282483    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:34:26.282549    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:34:26.293932    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:34:26.293951    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:34:26.293956    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:34:26.334805    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:34:26.334814    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:34:26.339583    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:34:26.339591    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:34:26.354276    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:34:26.354290    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:34:26.368904    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:34:26.368914    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:34:26.380431    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:34:26.380445    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:34:26.395048    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:34:26.395062    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:34:26.406648    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:34:26.406657    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:34:26.420147    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:34:26.420162    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:34:26.432334    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:34:26.432346    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:34:26.444262    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:34:26.444275    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:34:26.455860    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:34:26.455871    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:34:26.491277    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:34:26.491287    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:34:26.526672    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:34:26.526685    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:34:26.540817    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:34:26.540830    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:34:26.551585    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:34:26.551595    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:34:26.572903    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:34:26.572918    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:34:29.100601    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:34:34.102986    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:34:34.103093    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:34:34.115185    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:34:34.115259    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:34:34.130103    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:34:34.130180    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:34:34.140682    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:34:34.140746    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:34:34.151200    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:34:34.151278    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:34:34.162537    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:34:34.162606    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:34:34.173421    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:34:34.173491    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:34:34.183583    4989 logs.go:276] 0 containers: []
	W0816 10:34:34.183597    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:34:34.183662    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:34:34.195779    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:34:34.195801    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:34:34.195808    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:34:34.231297    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:34:34.231310    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:34:34.243470    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:34:34.243483    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:34:34.261994    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:34:34.262004    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:34:34.303727    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:34:34.303748    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:34:34.318967    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:34:34.318978    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:34:34.355767    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:34:34.355778    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:34:34.367919    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:34:34.367930    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:34:34.383078    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:34:34.383090    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:34:34.396839    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:34:34.396854    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:34:34.413737    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:34:34.413748    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:34:34.431152    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:34:34.431169    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:34:34.447330    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:34:34.447343    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:34:34.459972    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:34:34.459985    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:34:34.472582    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:34:34.472595    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:34:34.498564    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:34:34.498586    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:34:34.503436    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:34:34.503448    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:34:37.022452    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:34:42.024977    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:34:42.025092    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:34:42.040328    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:34:42.040408    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:34:42.051215    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:34:42.051277    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:34:42.064527    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:34:42.064592    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:34:42.074668    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:34:42.074728    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:34:42.085178    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:34:42.085255    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:34:42.095733    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:34:42.095793    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:34:42.106296    4989 logs.go:276] 0 containers: []
	W0816 10:34:42.106308    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:34:42.106365    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:34:42.116564    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:34:42.116583    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:34:42.116588    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:34:42.151727    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:34:42.151739    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:34:42.165820    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:34:42.165830    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:34:42.176910    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:34:42.176924    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:34:42.191893    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:34:42.191902    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:34:42.213253    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:34:42.213263    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:34:42.217629    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:34:42.217635    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:34:42.231303    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:34:42.231316    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:34:42.243373    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:34:42.243386    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:34:42.255287    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:34:42.255297    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:34:42.269554    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:34:42.269563    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:34:42.294259    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:34:42.294269    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:34:42.328562    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:34:42.328574    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:34:42.339833    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:34:42.339849    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:34:42.351393    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:34:42.351402    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:34:42.365450    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:34:42.365460    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:34:42.382961    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:34:42.382970    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:34:44.927215    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:34:49.929385    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:34:49.929484    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:34:49.939980    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:34:49.940058    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:34:49.953162    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:34:49.953233    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:34:49.963537    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:34:49.963608    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:34:49.973853    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:34:49.973927    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:34:49.985202    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:34:49.985276    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:34:49.997725    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:34:49.997794    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:34:50.008058    4989 logs.go:276] 0 containers: []
	W0816 10:34:50.008070    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:34:50.008129    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:34:50.018260    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:34:50.018278    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:34:50.018284    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:34:50.051918    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:34:50.051931    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:34:50.065521    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:34:50.065534    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:34:50.077440    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:34:50.077451    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:34:50.094699    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:34:50.094708    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:34:50.109129    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:34:50.109141    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:34:50.113709    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:34:50.113718    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:34:50.128760    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:34:50.128772    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:34:50.144182    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:34:50.144191    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:34:50.168358    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:34:50.168365    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:34:50.180281    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:34:50.180294    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:34:50.194454    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:34:50.194464    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:34:50.205191    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:34:50.205201    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:34:50.218873    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:34:50.218885    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:34:50.230926    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:34:50.230936    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:34:50.244593    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:34:50.244601    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:34:50.286615    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:34:50.286624    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:34:52.823480    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:34:57.826112    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:34:57.826269    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:34:57.838102    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:34:57.838181    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:34:57.849109    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:34:57.849183    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:34:57.863285    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:34:57.863355    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:34:57.874088    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:34:57.874160    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:34:57.884938    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:34:57.885009    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:34:57.896472    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:34:57.896536    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:34:57.907477    4989 logs.go:276] 0 containers: []
	W0816 10:34:57.907490    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:34:57.907550    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:34:57.918427    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:34:57.918444    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:34:57.918452    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:34:57.936493    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:34:57.936504    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:34:57.948274    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:34:57.948288    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:34:57.968960    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:34:57.968972    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:34:57.980722    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:34:57.980736    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:34:58.005146    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:34:58.005154    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:34:58.040850    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:34:58.040861    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:34:58.055618    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:34:58.055628    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:34:58.068854    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:34:58.068868    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:34:58.083581    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:34:58.083595    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:34:58.088121    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:34:58.088130    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:34:58.123947    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:34:58.123957    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:34:58.135886    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:34:58.135896    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:34:58.147151    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:34:58.147164    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:34:58.159305    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:34:58.159315    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:34:58.202010    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:34:58.202020    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:34:58.216875    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:34:58.216887    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:00.733712    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:05.736541    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:05.736958    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:35:05.778343    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:35:05.778445    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:35:05.797921    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:35:05.797993    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:35:05.810570    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:35:05.810650    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:35:05.821466    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:35:05.821536    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:35:05.833883    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:35:05.833945    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:35:05.844109    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:35:05.844178    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:35:05.854475    4989 logs.go:276] 0 containers: []
	W0816 10:35:05.854497    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:35:05.854559    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:35:05.873464    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:35:05.873480    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:35:05.873485    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:35:05.909586    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:35:05.909600    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:35:05.926072    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:35:05.926083    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:35:05.943888    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:35:05.943899    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:35:05.967864    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:35:05.967872    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:35:05.981875    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:35:05.981886    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:35:06.002423    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:35:06.002433    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:06.017001    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:35:06.017014    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:35:06.028346    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:35:06.028358    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:35:06.062976    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:35:06.062989    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:35:06.074838    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:35:06.074848    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:35:06.086395    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:35:06.086405    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:35:06.098277    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:35:06.098287    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:35:06.110718    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:35:06.110729    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:35:06.153039    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:35:06.153048    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:35:06.157369    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:35:06.157378    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:35:06.170837    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:35:06.170851    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:35:08.686931    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:13.688977    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:13.689058    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:35:13.701468    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:35:13.701535    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:35:13.712082    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:35:13.712148    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:35:13.722579    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:35:13.722641    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:35:13.733856    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:35:13.733930    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:35:13.743939    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:35:13.744000    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:35:13.757985    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:35:13.758058    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:35:13.768414    4989 logs.go:276] 0 containers: []
	W0816 10:35:13.768425    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:35:13.768479    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:35:13.779348    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:35:13.779370    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:35:13.779376    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:35:13.812591    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:35:13.812605    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:35:13.824376    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:35:13.824388    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:35:13.839214    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:35:13.839227    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:35:13.851091    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:35:13.851103    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:35:13.879714    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:35:13.879724    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:13.893730    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:35:13.893739    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:35:13.906279    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:35:13.906289    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:35:13.921107    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:35:13.921118    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:35:13.964441    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:35:13.964455    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:35:13.970436    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:35:13.970449    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:35:13.992205    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:35:13.992220    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:35:14.036964    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:35:14.036979    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:35:14.049301    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:35:14.049313    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:35:14.061356    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:35:14.061368    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:35:14.073487    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:35:14.073498    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:35:14.096192    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:35:14.096200    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:35:16.612126    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:21.614267    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:21.614374    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:35:21.629121    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:35:21.629197    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:35:21.640007    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:35:21.640079    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:35:21.652993    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:35:21.653067    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:35:21.663754    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:35:21.663831    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:35:21.678716    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:35:21.678791    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:35:21.689849    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:35:21.689919    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:35:21.700431    4989 logs.go:276] 0 containers: []
	W0816 10:35:21.700443    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:35:21.700509    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:35:21.711059    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:35:21.711077    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:35:21.711083    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:35:21.723321    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:35:21.723334    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:35:21.737958    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:35:21.737969    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:35:21.750782    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:35:21.750793    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:35:21.775469    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:35:21.775480    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:35:21.794125    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:35:21.794137    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:35:21.807398    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:35:21.807410    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:35:21.850799    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:35:21.850819    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:35:21.868465    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:35:21.868475    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:35:21.909972    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:35:21.909987    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:21.927964    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:35:21.927977    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:35:21.932528    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:35:21.932537    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:35:21.973308    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:35:21.973320    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:35:21.989322    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:35:21.989335    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:35:22.003301    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:35:22.003312    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:35:22.019473    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:35:22.019485    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:35:22.032460    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:35:22.032474    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:35:24.553257    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:29.555369    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:29.555637    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:35:29.583021    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:35:29.583139    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:35:29.600552    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:35:29.600644    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:35:29.613826    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:35:29.613901    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:35:29.629170    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:35:29.629240    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:35:29.639691    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:35:29.639767    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:35:29.651940    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:35:29.652010    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:35:29.661660    4989 logs.go:276] 0 containers: []
	W0816 10:35:29.661669    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:35:29.661728    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:35:29.672818    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:35:29.672835    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:35:29.672840    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:35:29.689853    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:35:29.689865    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:35:29.712665    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:35:29.712673    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:35:29.724310    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:35:29.724320    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:35:29.736256    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:35:29.736267    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:35:29.769333    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:35:29.769342    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:35:29.785612    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:35:29.785623    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:35:29.799139    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:35:29.799149    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:35:29.805529    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:35:29.805538    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:35:29.840561    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:35:29.840574    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:35:29.855619    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:35:29.855633    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:35:29.867611    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:35:29.867627    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:35:29.878890    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:35:29.878900    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:35:29.893123    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:35:29.893135    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:35:29.907345    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:35:29.907356    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:35:29.920015    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:35:29.920029    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:35:29.962891    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:35:29.962904    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:32.479417    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:37.481575    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:37.481687    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:35:37.492702    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:35:37.492780    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:35:37.503002    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:35:37.503073    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:35:37.514121    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:35:37.514192    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:35:37.524528    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:35:37.524607    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:35:37.534881    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:35:37.534948    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:35:37.546053    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:35:37.546121    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:35:37.557015    4989 logs.go:276] 0 containers: []
	W0816 10:35:37.557026    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:35:37.557081    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:35:37.567430    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:35:37.567448    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:35:37.567454    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:35:37.590070    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:35:37.590082    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:35:37.601342    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:35:37.601352    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:35:37.641609    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:35:37.641621    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:35:37.646218    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:35:37.646225    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:35:37.659652    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:35:37.659670    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:35:37.673904    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:35:37.673914    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:35:37.685468    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:35:37.685477    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:35:37.696874    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:35:37.696884    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:35:37.731374    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:35:37.731385    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:35:37.745810    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:35:37.745821    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:35:37.761509    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:35:37.761519    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:35:37.774415    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:35:37.774429    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:37.788424    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:35:37.788437    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:35:37.799736    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:35:37.799748    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:35:37.835168    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:35:37.835181    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:35:37.859563    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:35:37.859572    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:35:40.373512    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:45.375700    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:45.375846    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:35:45.387556    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:35:45.387635    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:35:45.402173    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:35:45.402242    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:35:45.412214    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:35:45.412392    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:35:45.424666    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:35:45.424733    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:35:45.434996    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:35:45.435063    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:35:45.446012    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:35:45.446076    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:35:45.456580    4989 logs.go:276] 0 containers: []
	W0816 10:35:45.456594    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:35:45.456650    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:35:45.470789    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:35:45.470807    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:35:45.470813    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:35:45.485078    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:35:45.485090    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:35:45.500171    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:35:45.500181    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:35:45.512802    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:35:45.512815    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:35:45.529676    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:35:45.529690    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:35:45.534156    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:35:45.534163    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:35:45.566977    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:35:45.566990    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:35:45.578263    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:35:45.578276    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:45.592603    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:35:45.592617    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:35:45.604163    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:35:45.604176    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:35:45.644445    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:35:45.644455    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:35:45.680386    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:35:45.680396    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:35:45.702928    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:35:45.702938    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:35:45.717277    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:35:45.717288    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:35:45.737758    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:35:45.737769    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:35:45.748894    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:35:45.748907    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:35:45.762722    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:35:45.762735    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:35:48.277384    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:53.279623    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:53.279719    4989 kubeadm.go:597] duration metric: took 4m4.410973666s to restartPrimaryControlPlane
	W0816 10:35:53.279785    4989 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 10:35:53.279811    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0816 10:35:54.266982    4989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:35:54.273273    4989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:35:54.276307    4989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:35:54.279338    4989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:35:54.279345    4989 kubeadm.go:157] found existing configuration files:
	
	I0816 10:35:54.279369    4989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/admin.conf
	I0816 10:35:54.282281    4989 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:35:54.282311    4989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:35:54.284821    4989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/kubelet.conf
	I0816 10:35:54.287603    4989 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:35:54.287630    4989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:35:54.290497    4989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/controller-manager.conf
	I0816 10:35:54.292742    4989 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:35:54.292764    4989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:35:54.295262    4989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/scheduler.conf
	I0816 10:35:54.297749    4989 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:35:54.297772    4989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
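
The four grep/rm pairs above implement a simple stale-config sweep: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed so that `kubeadm init` can regenerate it. A minimal sketch of that loop, assuming direct file access instead of minikube's ssh_runner; the endpoint string is copied from the log.

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:50292")
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, endpoint) {
                // Mirrors `grep ... || rm -f ...` in the log; removing an
                // already-missing file is harmless.
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                os.Remove(f)
            }
        }
    }
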
	I0816 10:35:54.300077    4989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 10:35:54.316738    4989 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0816 10:35:54.316771    4989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 10:35:54.368069    4989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 10:35:54.368264    4989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 10:35:54.368365    4989 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 10:35:54.421762    4989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 10:35:54.429910    4989 out.go:235]   - Generating certificates and keys ...
	I0816 10:35:54.429943    4989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 10:35:54.429972    4989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 10:35:54.430028    4989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 10:35:54.430135    4989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 10:35:54.430173    4989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 10:35:54.430273    4989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 10:35:54.430398    4989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 10:35:54.430482    4989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 10:35:54.430529    4989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 10:35:54.430579    4989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 10:35:54.430599    4989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 10:35:54.430649    4989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 10:35:54.449244    4989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 10:35:54.532985    4989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 10:35:54.600376    4989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 10:35:54.750472    4989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 10:35:54.777983    4989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 10:35:54.778376    4989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 10:35:54.778418    4989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 10:35:54.865559    4989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 10:35:54.869748    4989 out.go:235]   - Booting up control plane ...
	I0816 10:35:54.869828    4989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 10:35:54.869904    4989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 10:35:54.869945    4989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 10:35:54.876813    4989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 10:35:54.877778    4989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 10:35:59.380100    4989 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502247 seconds
	I0816 10:35:59.380161    4989 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 10:35:59.383976    4989 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 10:35:59.914866    4989 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 10:35:59.915383    4989 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-260000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 10:36:00.419896    4989 kubeadm.go:310] [bootstrap-token] Using token: ikrzsf.2vzddhz1mwsv220r
	I0816 10:36:00.426134    4989 out.go:235]   - Configuring RBAC rules ...
	I0816 10:36:00.426202    4989 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 10:36:00.426264    4989 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 10:36:00.435628    4989 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 10:36:00.436773    4989 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0816 10:36:00.437556    4989 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 10:36:00.438379    4989 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 10:36:00.441395    4989 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 10:36:00.618578    4989 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 10:36:00.832982    4989 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 10:36:00.833367    4989 kubeadm.go:310] 
	I0816 10:36:00.833399    4989 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 10:36:00.833405    4989 kubeadm.go:310] 
	I0816 10:36:00.833449    4989 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 10:36:00.833471    4989 kubeadm.go:310] 
	I0816 10:36:00.833483    4989 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 10:36:00.833562    4989 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 10:36:00.833592    4989 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 10:36:00.833595    4989 kubeadm.go:310] 
	I0816 10:36:00.833655    4989 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 10:36:00.833659    4989 kubeadm.go:310] 
	I0816 10:36:00.833679    4989 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 10:36:00.833683    4989 kubeadm.go:310] 
	I0816 10:36:00.833742    4989 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 10:36:00.833774    4989 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 10:36:00.833860    4989 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 10:36:00.833865    4989 kubeadm.go:310] 
	I0816 10:36:00.833918    4989 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 10:36:00.833954    4989 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 10:36:00.833956    4989 kubeadm.go:310] 
	I0816 10:36:00.834001    4989 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ikrzsf.2vzddhz1mwsv220r \
	I0816 10:36:00.834068    4989 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3dbef51adc186d93171c6716e4c9d3e67358220996635d2d9ed7318abf8b1c24 \
	I0816 10:36:00.834079    4989 kubeadm.go:310] 	--control-plane 
	I0816 10:36:00.834081    4989 kubeadm.go:310] 
	I0816 10:36:00.834122    4989 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 10:36:00.834125    4989 kubeadm.go:310] 
	I0816 10:36:00.834163    4989 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ikrzsf.2vzddhz1mwsv220r \
	I0816 10:36:00.834221    4989 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3dbef51adc186d93171c6716e4c9d3e67358220996635d2d9ed7318abf8b1c24 
	I0816 10:36:00.834282    4989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
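
The --discovery-token-ca-cert-hash value printed in the join commands above is a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate. The short sketch below recomputes it; the CA path is an assumption based on the certificateDir ("/var/lib/minikube/certs") logged earlier.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the raw SubjectPublicKeyInfo, not the whole certificate.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum) // compare with the hash in the join command
    }
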
	I0816 10:36:00.834289    4989 cni.go:84] Creating CNI manager for ""
	I0816 10:36:00.834298    4989 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:36:00.838601    4989 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 10:36:00.846654    4989 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 10:36:00.849562    4989 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
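
The 496-byte conflist scp'd from memory above is not shown in the log, so the following is only a typical bridge CNI configuration of the kind minikube writes when it recommends "bridge", together with the write itself; the subnet and plugin details are assumptions, not the file's actual contents.

    package main

    import "os"

    // An assumed, minimal bridge + portmap conflist; the real 1-k8s.conflist
    // may differ in plugin options and pod CIDR.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
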
	I0816 10:36:00.854470    4989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 10:36:00.854536    4989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-260000 minikube.k8s.io/updated_at=2024_08_16T10_36_00_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=running-upgrade-260000 minikube.k8s.io/primary=true
	I0816 10:36:00.854555    4989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:36:00.858786    4989 ops.go:34] apiserver oom_adj: -16
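
The oom_adj check above reads /proc/<pid>/oom_adj for the apiserver process; a strongly negative value such as -16 makes the kernel's OOM killer much less likely to pick the apiserver as a victim. A minimal sketch of the same check, using a pgrep-based PID lookup as in the log (the error handling is illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pids := strings.Fields(string(out))
        if len(pids) == 0 {
            panic("kube-apiserver not running")
        }
        adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
    }
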
	I0816 10:36:00.909334    4989 kubeadm.go:1113] duration metric: took 54.833625ms to wait for elevateKubeSystemPrivileges
	I0816 10:36:00.909453    4989 kubeadm.go:394] duration metric: took 4m12.060091791s to StartCluster
	I0816 10:36:00.909465    4989 settings.go:142] acquiring lock: {Name:mkd2048b6677d6c95a407663b8dc541f5fa54e50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:36:00.909548    4989 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:36:00.909928    4989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/kubeconfig: {Name:mk2e4f2b039616ddb85ed20d74e703a928518229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:36:00.910123    4989 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:36:00.910133    4989 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:36:00.910221    4989 config.go:182] Loaded profile config "running-upgrade-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:36:00.910174    4989 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-260000"
	I0816 10:36:00.910221    4989 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-260000"
	I0816 10:36:00.910247    4989 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-260000"
	I0816 10:36:00.910250    4989 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-260000"
	W0816 10:36:00.910253    4989 addons.go:243] addon storage-provisioner should already be in state true
	I0816 10:36:00.910280    4989 host.go:66] Checking if "running-upgrade-260000" exists ...
	I0816 10:36:00.911170    4989 kapi.go:59] client config for running-upgrade-260000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106681610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:36:00.911291    4989 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-260000"
	W0816 10:36:00.911295    4989 addons.go:243] addon default-storageclass should already be in state true
	I0816 10:36:00.911303    4989 host.go:66] Checking if "running-upgrade-260000" exists ...
	I0816 10:36:00.914572    4989 out.go:177] * Verifying Kubernetes components...
	I0816 10:36:00.914915    4989 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 10:36:00.917991    4989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 10:36:00.917997    4989 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/running-upgrade-260000/id_rsa Username:docker}
	I0816 10:36:00.920549    4989 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:36:00.924599    4989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:36:00.928629    4989 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:36:00.928635    4989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 10:36:00.928641    4989 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/running-upgrade-260000/id_rsa Username:docker}
	I0816 10:36:01.012151    4989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:36:01.017232    4989 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:36:01.017271    4989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:36:01.022016    4989 api_server.go:72] duration metric: took 111.884166ms to wait for apiserver process to appear ...
	I0816 10:36:01.022024    4989 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:36:01.022032    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:01.035158    4989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:36:01.093150    4989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 10:36:01.370097    4989 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:36:01.370109    4989 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:36:06.022450    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:06.022532    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:11.024054    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:11.024114    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:16.024423    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:16.024476    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:21.024931    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:21.024989    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:26.025622    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:26.025645    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:31.026320    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:31.026370    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0816 10:36:31.371806    4989 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0816 10:36:31.374994    4989 out.go:177] * Enabled addons: storage-provisioner
	I0816 10:36:31.382939    4989 addons.go:510] duration metric: took 30.473455667s for enable addons: enabled=[storage-provisioner]
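
The default-storageclass failure above comes from a List call against the storage API group while the apiserver is unreachable. A hedged client-go sketch of that call follows; the kubeconfig path is an assumption, and minikube's actual addon code path differs.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            // With the apiserver down this surfaces as the i/o timeout seen above.
            fmt.Println("Error listing StorageClasses:", err)
            return
        }
        for _, sc := range scs.Items {
            fmt.Println(sc.Name)
        }
    }
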
	I0816 10:36:36.027262    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:36.027298    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:41.028468    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:41.028513    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:46.028793    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:46.028880    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:51.029428    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:51.029456    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:56.029760    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:56.029824    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:01.031920    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:01.032045    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:01.054646    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:01.054718    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:01.065066    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:01.065131    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:01.075503    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:01.075568    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:01.085529    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:01.085589    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:01.096351    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:01.096422    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:01.106772    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:01.106837    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:01.117132    4989 logs.go:276] 0 containers: []
	W0816 10:37:01.117143    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:01.117193    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:01.127273    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:01.127286    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:01.127292    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:01.138565    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:01.138575    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:01.162447    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:01.162454    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:01.173588    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:01.173600    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:01.211094    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:01.211107    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:01.222821    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:01.222831    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:01.237719    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:01.237729    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:01.251235    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:01.251244    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:01.262352    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:01.262364    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:01.273821    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:01.273830    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:01.288827    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:01.288838    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:01.306715    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:01.306725    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:01.342610    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:01.342617    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:03.849379    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:08.851639    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:08.851770    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:08.863766    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:08.863860    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:08.874412    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:08.874485    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:08.884688    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:08.884771    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:08.895211    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:08.895274    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:08.905678    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:08.905750    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:08.916350    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:08.916428    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:08.926495    4989 logs.go:276] 0 containers: []
	W0816 10:37:08.926507    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:08.926573    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:08.937016    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:08.937030    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:08.937036    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:08.948494    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:08.948505    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:08.960149    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:08.960160    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:08.977274    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:08.977284    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:08.991899    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:08.991910    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:09.026629    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:09.026639    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:09.031217    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:09.031224    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:09.071123    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:09.071134    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:09.087635    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:09.087646    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:09.100145    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:09.100157    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:09.115986    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:09.115996    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:09.128469    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:09.128482    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:09.144029    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:09.144040    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:11.669805    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:16.672494    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:16.672939    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:16.725383    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:16.725507    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:16.748210    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:16.748293    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:16.766127    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:16.766197    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:16.778992    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:16.779063    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:16.789888    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:16.789967    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:16.800981    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:16.801050    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:16.811508    4989 logs.go:276] 0 containers: []
	W0816 10:37:16.811518    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:16.811572    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:16.822453    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:16.822469    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:16.822477    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:16.886989    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:16.887008    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:16.902384    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:16.902396    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:16.920487    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:16.920500    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:16.933384    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:16.933396    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:16.956853    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:16.956862    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:16.968092    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:16.968108    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:17.000724    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:17.000732    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:17.004801    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:17.004809    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:17.016475    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:17.016490    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:17.028220    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:17.028231    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:17.043081    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:17.043093    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:17.054740    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:17.054750    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:19.574757    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:24.577142    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:24.577488    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:24.619366    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:24.619502    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:24.640990    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:24.641089    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:24.655958    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:24.656034    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:24.668310    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:24.668373    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:24.679386    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:24.679463    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:24.690605    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:24.690666    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:24.702192    4989 logs.go:276] 0 containers: []
	W0816 10:37:24.702207    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:24.702259    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:24.712613    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:24.712629    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:24.712635    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:24.754233    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:24.754245    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:24.769393    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:24.769403    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:24.789085    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:24.789099    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:24.800888    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:24.800899    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:24.818233    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:24.818245    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:24.822790    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:24.822800    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:24.836939    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:24.836952    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:24.848374    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:24.848387    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:24.867821    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:24.867833    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:24.879459    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:24.879471    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:24.905182    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:24.905194    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:24.917217    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:24.917229    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
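The block above is the first full iteration of a pattern that repeats for the rest of this start attempt: the apiserver /healthz probe times out after roughly five seconds ("Client.Timeout exceeded while awaiting headers"), and minikube responds by re-enumerating the control-plane containers and re-collecting their logs. As a rough illustration of the probe step only, here is a minimal sketch, not minikube's actual api_server.go; the URL and the five-second timeout are read off the log, and TLS verification is skipped because the sketch has no access to the cluster CA:

    // Minimal sketch of the healthz probe pattern seen above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func apiserverHealthy(url string) error {
    	client := &http.Client{
    		// Matches the ~5s gap between each "Checking" and "stopped" line.
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Assumption: no cluster CA available in this sketch.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		// e.g. "Client.Timeout exceeded while awaiting headers"
    		return fmt.Errorf("stopped: %s: %w", url, err)
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	if err := apiserverHealthy("https://10.0.2.15:8443/healthz"); err != nil {
    		fmt.Println(err)
    	}
    }

Note that the kube-apiserver container (ccd266393b75) is present throughout this section; the endpoint simply never becomes healthy, so every probe returns the same timeout error.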
	I0816 10:37:27.454330    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:32.456213    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:32.456451    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:32.476734    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:32.476832    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:32.491298    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:32.491379    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:32.503264    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:32.503326    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:32.513924    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:32.513985    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:32.524313    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:32.524383    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:32.534563    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:32.534633    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:32.545000    4989 logs.go:276] 0 containers: []
	W0816 10:37:32.545012    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:32.545068    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:32.555589    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:32.555605    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:32.555611    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:32.624771    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:32.624785    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:32.645331    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:32.645343    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:32.656768    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:32.656778    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:32.668191    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:32.668203    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:32.679944    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:32.679956    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:32.699680    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:32.699693    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:32.733189    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:32.733198    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:32.738582    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:32.738592    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:32.752516    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:32.752529    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:32.767370    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:32.767383    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:32.782162    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:32.782172    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:32.805539    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:32.805550    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
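Each failed probe is followed by one docker ps query per control-plane component, which is where the "N containers: [...]" lines come from (logs.go:276). Below is a minimal sketch of that enumeration step, assuming a local docker CLI where minikube actually runs the same command over SSH inside the guest:

    // Minimal sketch of the per-component container enumeration.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func k8sContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	// One short container ID per output line.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := k8sContainers(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		// Mirrors the "N containers: [...]" lines from logs.go:276.
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }

The kindnet query returning zero containers (and the matching warning) is expected here, since this cluster does not use the kindnet CNI.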
	I0816 10:37:35.319149    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:40.321530    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:40.321856    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:40.362804    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:40.362944    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:40.385369    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:40.385484    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:40.400504    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:40.400588    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:40.413020    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:40.413086    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:40.424330    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:40.424404    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:40.435258    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:40.435331    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:40.447114    4989 logs.go:276] 0 containers: []
	W0816 10:37:40.447128    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:40.447191    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:40.457616    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:40.457631    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:40.457635    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:40.475019    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:40.475028    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:40.486980    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:40.486990    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:40.521478    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:40.521487    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:40.526272    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:40.526281    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:40.537973    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:40.537983    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:40.557876    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:40.557890    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:40.572520    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:40.572529    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:40.597121    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:40.597132    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:40.608472    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:40.608482    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:40.644244    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:40.644255    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:40.658708    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:40.658720    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:40.672278    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:40.672290    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:43.191155    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:48.192307    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:48.192536    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:48.214109    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:48.214199    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:48.228919    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:48.228990    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:48.240975    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:48.241041    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:48.256511    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:48.256583    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:48.266553    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:48.266620    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:48.280831    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:48.280897    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:48.290724    4989 logs.go:276] 0 containers: []
	W0816 10:37:48.290734    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:48.290794    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:48.301156    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:48.301172    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:48.301177    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:48.334676    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:48.334685    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:48.371039    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:48.371051    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:48.385224    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:48.385237    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:48.397014    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:48.397025    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:48.409429    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:48.409440    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:48.420623    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:48.420633    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:48.444595    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:48.444605    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:48.449229    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:48.449238    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:48.466944    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:48.466957    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:48.478650    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:48.478662    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:48.503555    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:48.503565    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:48.524214    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:48.524224    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:51.041883    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:56.041903    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:56.042149    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:56.063797    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:56.063895    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:56.081651    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:56.081722    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:56.093922    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:56.093998    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:56.104911    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:56.104977    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:56.115582    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:56.115649    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:56.125832    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:56.125896    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:56.135955    4989 logs.go:276] 0 containers: []
	W0816 10:37:56.135965    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:56.136021    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:56.146149    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:56.146163    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:56.146167    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:56.157601    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:56.157612    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:56.169612    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:56.169625    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:56.187233    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:56.187241    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:56.202130    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:56.202142    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:56.236975    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:56.236984    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:56.250985    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:56.250996    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:56.264758    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:56.264769    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:56.280642    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:56.280655    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:56.292253    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:56.292265    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:56.315723    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:56.315731    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:56.327087    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:56.327100    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:56.331599    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:56.331609    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
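After enumeration, each discovered container is tailed with docker logs --tail 400, and host-level sources are collected alongside: the kubelet and docker/cri-docker units via journalctl, the kernel ring buffer via dmesg, and node state via the bundled kubectl with the guest kubeconfig. A minimal sketch of that gathering pass follows, again assuming local commands rather than minikube's ssh_runner; the tail length, unit names, and container IDs are taken from the log lines above:

    // Minimal sketch of the per-source log gathering pass.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and prints a labelled dump of its output,
    // roughly in the spirit of the "Gathering logs for ..." steps above.
    func run(name string, args ...string) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	fmt.Printf("==> %s %v (err=%v)\n%s\n", name, args, err, out)
    }

    func main() {
    	// Container IDs taken from the log (kube-apiserver, etcd).
    	for _, id := range []string{"ccd266393b75", "2e87491cb270"} {
    		run("docker", "logs", "--tail", "400", id)
    	}
    	// Host-level sources; may require root, as the sudo in the log suggests.
    	run("journalctl", "-u", "kubelet", "-n", "400")
    	run("journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400")
    }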
	I0816 10:37:58.869556    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:03.870392    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:03.870553    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:03.884655    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:03.884736    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:03.897907    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:03.897977    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:03.910499    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:38:03.910569    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:03.920595    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:03.920658    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:03.931101    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:03.931174    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:03.941606    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:03.941668    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:03.951841    4989 logs.go:276] 0 containers: []
	W0816 10:38:03.951855    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:03.951913    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:03.962441    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:03.962456    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:03.962460    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:03.997164    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:03.997171    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:04.032867    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:04.032879    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:04.047427    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:04.047440    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:04.059896    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:04.059907    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:04.075139    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:04.075148    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:04.087074    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:04.087085    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:04.111896    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:04.111904    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:04.116437    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:04.116442    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:04.132131    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:04.132141    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:04.144242    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:04.144252    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:04.156459    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:04.156471    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:04.174452    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:04.174465    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:06.685901    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:11.687330    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:11.687591    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:11.710841    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:11.710952    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:11.730464    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:11.730550    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:11.742776    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:38:11.742848    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:11.753760    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:11.753822    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:11.769157    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:11.769227    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:11.779778    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:11.779843    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:11.793013    4989 logs.go:276] 0 containers: []
	W0816 10:38:11.793024    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:11.793089    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:11.803221    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:11.803238    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:11.803243    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:11.814964    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:11.814977    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:11.832757    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:11.832770    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:11.844405    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:11.844415    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:11.869662    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:11.869672    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:11.883927    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:11.883940    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:11.895520    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:11.895530    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:11.913711    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:11.913721    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:11.928105    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:11.928118    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:11.939926    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:11.939940    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:11.951159    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:11.951173    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:11.986091    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:11.986100    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:11.990929    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:11.990934    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:14.545428    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:19.547054    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:19.547256    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:19.566705    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:19.566794    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:19.581419    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:19.581502    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:19.593524    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:38:19.593603    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:19.607002    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:19.607076    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:19.621570    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:19.621632    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:19.631799    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:19.631864    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:19.641745    4989 logs.go:276] 0 containers: []
	W0816 10:38:19.641756    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:19.641810    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:19.652639    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:19.652656    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:19.652661    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:19.689429    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:38:19.689442    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:38:19.706525    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:38:19.706536    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:38:19.719740    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:19.719750    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:19.735497    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:19.735510    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:19.746686    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:19.746701    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:19.781849    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:19.781858    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:19.795701    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:19.795713    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:19.807172    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:19.807182    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:19.818904    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:19.818913    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:19.842479    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:19.842487    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:19.854004    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:19.854015    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:19.858497    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:19.858506    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:19.872395    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:19.872404    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:19.884472    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:19.884482    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
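From 10:38:19 onward the coredns query returns four IDs instead of two; because the enumeration uses docker ps -a, exited containers remain visible, so the extra IDs are most likely restarted coredns instances listed alongside their stopped predecessors. The "container status" step at the end of each cycle also shows a small runtime fallback: prefer crictl when it is installed, otherwise fall back to docker ps -a. A minimal sketch of that fallback as a single shell invocation, mirroring the command string in the log:

    // Minimal sketch of the crictl-or-docker "container status" fallback.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same command string as in the log: use crictl if found on PATH,
    	// otherwise the `|| sudo docker ps -a` branch takes over.
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    	}
    	fmt.Printf("%s", out)
    }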
	I0816 10:38:22.404353    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:27.405081    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:27.405425    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:27.435770    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:27.435897    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:27.454552    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:27.454643    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:27.468553    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:38:27.468625    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:27.480176    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:27.480248    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:27.490759    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:27.490831    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:27.501478    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:27.501544    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:27.512017    4989 logs.go:276] 0 containers: []
	W0816 10:38:27.512028    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:27.512087    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:27.522706    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:27.522721    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:27.522727    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:27.537036    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:27.537048    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:27.549143    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:27.549158    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:27.583778    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:27.583786    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:27.595854    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:27.595867    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:27.622101    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:27.622115    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:27.640380    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:27.640392    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:27.645566    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:27.645573    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:27.680474    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:38:27.680485    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:38:27.699026    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:27.699036    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:27.721271    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:27.721281    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:27.733120    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:27.733132    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:27.746925    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:38:27.746936    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:38:27.757982    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:27.757991    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:27.773386    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:27.773398    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:30.286606    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:35.288876    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:35.289197    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:35.324527    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:35.324629    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:35.347363    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:35.347444    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:35.361400    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:38:35.361475    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:35.373279    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:35.373346    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:35.384450    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:35.384514    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:35.395385    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:35.395457    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:35.405918    4989 logs.go:276] 0 containers: []
	W0816 10:38:35.405930    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:35.405986    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:35.416813    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:35.416831    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:35.416836    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:35.452831    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:35.452843    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:35.467807    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:35.467816    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:35.482137    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:38:35.482147    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:38:35.494013    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:35.494025    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:35.498518    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:38:35.498528    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:38:35.510185    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:35.510197    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:35.522119    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:35.522132    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:35.542473    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:35.542482    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:35.554598    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:35.554608    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:35.566645    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:35.566659    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:35.578547    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:35.578562    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:35.596414    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:35.596424    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:35.608241    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:35.608250    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:35.643530    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:35.643538    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:38.171280    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:43.173432    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:43.173895    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:43.226344    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:43.226472    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:43.243592    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:43.243672    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:43.256858    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:38:43.256935    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:43.267999    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:43.268071    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:43.281716    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:43.281793    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:43.292827    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:43.292897    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:43.303560    4989 logs.go:276] 0 containers: []
	W0816 10:38:43.303573    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:43.303630    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:43.314704    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:43.314723    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:43.314729    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:43.319300    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:43.319308    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:43.331411    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:43.331423    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:43.346653    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:43.346666    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:43.364313    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:43.364322    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:43.379112    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:43.379125    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:43.414136    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:43.414153    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:43.426793    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:43.426807    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:43.451251    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:43.451258    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:43.463108    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:43.463119    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:43.501388    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:43.501402    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:43.515489    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:38:43.515498    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:38:43.527092    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:43.527103    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:43.539195    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:43.539208    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:43.553700    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:38:43.553713    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:38:46.066900    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:51.067215    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:51.067444    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:51.102398    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:51.102518    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:51.119518    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:51.119597    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:51.132818    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:38:51.132894    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:51.144188    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:51.144263    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:51.154355    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:51.154423    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:51.165005    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:51.165069    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:51.175406    4989 logs.go:276] 0 containers: []
	W0816 10:38:51.175419    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:51.175480    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:51.185657    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:51.185674    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:51.185680    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:51.219782    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:51.219793    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:51.231515    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:51.231524    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:51.250826    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:51.250835    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:51.275632    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:51.275644    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:51.287423    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:51.287435    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:51.301742    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:38:51.301752    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:38:51.313298    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:51.313308    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:51.331016    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:51.331026    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:51.366213    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:51.366222    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:51.379889    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:51.379899    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:51.391664    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:51.391679    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:51.396525    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:38:51.396531    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:38:51.407819    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:51.407833    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:51.419440    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:51.419451    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:53.933655    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:58.935834    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:58.935974    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:58.953337    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:58.953425    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:58.964771    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:58.964854    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:58.976408    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:38:58.976484    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:58.987167    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:58.987250    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:58.998162    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:58.998230    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:59.008303    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:59.008374    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:59.018269    4989 logs.go:276] 0 containers: []
	W0816 10:38:59.018280    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:59.018345    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:59.028620    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:59.028638    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:59.028644    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:59.063455    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:59.063467    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:59.085887    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:59.085899    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:59.097881    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:59.097895    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:59.102266    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:38:59.102276    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:38:59.117374    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:38:59.117388    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:38:59.135918    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:59.135929    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:59.148710    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:59.148724    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:59.160363    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:59.160376    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:59.195440    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:59.195452    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:59.211544    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:59.211556    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:59.229073    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:59.229086    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:59.244287    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:59.244297    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:59.256068    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:59.256078    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:59.278445    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:59.278454    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:01.804923    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:06.807222    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:06.807447    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:06.826002    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:06.826088    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:06.839542    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:06.839616    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:06.851361    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:06.851429    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:06.863101    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:06.863162    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:06.874168    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:06.874241    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:06.884876    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:06.884945    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:06.897064    4989 logs.go:276] 0 containers: []
	W0816 10:39:06.897074    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:06.897131    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:06.907277    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:06.907294    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:06.907300    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:06.921027    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:06.921038    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:06.932839    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:06.932850    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:06.944158    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:06.944169    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:06.961915    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:06.961924    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:06.998596    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:06.998609    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:07.009907    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:07.009918    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:07.021646    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:07.021657    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:07.038623    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:07.038633    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:07.071830    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:07.071841    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:07.076417    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:07.076423    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:07.090871    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:07.090881    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:07.102674    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:07.102683    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:07.117296    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:07.117305    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:07.129173    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:07.129189    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:09.655215    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:14.657447    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:14.657709    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:14.687842    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:14.687956    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:14.705559    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:14.705647    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:14.719525    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:14.719603    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:14.738468    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:14.738537    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:14.748846    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:14.748919    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:14.759718    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:14.759782    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:14.774252    4989 logs.go:276] 0 containers: []
	W0816 10:39:14.774263    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:14.774323    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:14.784621    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:14.784640    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:14.784647    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:14.798197    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:14.798206    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:14.809846    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:14.809860    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:14.824771    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:14.824783    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:14.836870    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:14.836882    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:14.854395    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:14.854409    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:14.878325    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:14.878334    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:14.914256    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:14.914268    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:14.927330    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:14.927345    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:14.969173    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:14.969191    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:14.973809    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:14.973815    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:14.987382    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:14.987395    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:14.998878    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:14.998888    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:15.010308    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:15.010318    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:15.022812    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:15.022825    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:17.536231    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:22.538499    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:22.538604    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:22.549367    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:22.549441    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:22.559544    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:22.559614    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:22.570568    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:22.570643    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:22.581495    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:22.581556    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:22.592408    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:22.592479    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:22.604497    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:22.604574    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:22.616158    4989 logs.go:276] 0 containers: []
	W0816 10:39:22.616170    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:22.616234    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:22.627313    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:22.627332    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:22.627338    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:22.640266    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:22.640280    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:22.655338    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:22.655348    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:22.672817    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:22.672828    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:22.685204    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:22.685217    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:22.701558    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:22.701568    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:22.742072    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:22.742083    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:22.753587    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:22.753596    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:22.764885    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:22.764897    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:22.776514    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:22.776526    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:22.788768    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:22.788779    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:22.793035    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:22.793043    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:22.807217    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:22.807228    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:22.818782    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:22.818792    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:22.842013    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:22.842020    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:25.376556    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:30.378624    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:30.378802    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:30.390926    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:30.391001    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:30.402372    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:30.402450    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:30.414201    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:30.414278    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:30.424785    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:30.424856    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:30.434896    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:30.434959    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:30.445742    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:30.445807    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:30.455374    4989 logs.go:276] 0 containers: []
	W0816 10:39:30.455386    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:30.455440    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:30.466722    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:30.466739    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:30.466745    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:30.473805    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:30.473812    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:30.510295    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:30.510307    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:30.522956    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:30.522965    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:30.537198    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:30.537207    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:30.570831    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:30.570841    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:30.588928    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:30.588939    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:30.601489    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:30.601501    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:30.616287    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:30.616300    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:30.639916    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:30.639929    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:30.651300    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:30.651313    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:30.663045    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:30.663056    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:30.674934    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:30.674946    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:30.691675    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:30.691688    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:30.709790    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:30.709800    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:33.235121    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:38.235965    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:38.236063    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:38.246921    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:38.246982    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:38.257641    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:38.257715    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:38.268819    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:38.268894    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:38.279719    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:38.279789    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:38.290502    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:38.290566    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:38.301751    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:38.301817    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:38.311955    4989 logs.go:276] 0 containers: []
	W0816 10:39:38.311968    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:38.312022    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:38.323122    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:38.323139    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:38.323147    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:38.335232    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:38.335243    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:38.349653    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:38.349664    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:38.383838    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:38.383846    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:38.388552    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:38.388557    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:38.402559    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:38.402568    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:38.417174    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:38.417184    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:38.428587    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:38.428600    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:38.440903    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:38.440913    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:38.476300    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:38.476310    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:38.488456    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:38.488466    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:38.513426    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:38.513438    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:38.525680    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:38.525693    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:38.537879    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:38.537893    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:38.553694    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:38.553705    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:41.073572    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:46.075688    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:46.075886    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:46.096186    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:46.096276    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:46.110928    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:46.110999    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:46.126615    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:46.126680    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:46.138063    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:46.138131    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:46.148104    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:46.148167    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:46.158747    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:46.158811    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:46.168765    4989 logs.go:276] 0 containers: []
	W0816 10:39:46.168778    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:46.168829    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:46.179183    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:46.179201    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:46.179205    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:46.202194    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:46.202203    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:46.206238    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:46.206247    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:46.244817    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:46.244833    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:46.257301    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:46.257313    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:46.272679    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:46.272691    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:46.284072    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:46.284083    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:46.299408    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:46.299421    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:46.311457    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:46.311471    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:46.346671    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:46.346684    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:46.366428    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:46.366455    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:46.377370    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:46.377378    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:46.392119    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:46.392133    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:46.419240    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:46.419251    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:46.433688    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:46.433702    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:48.947476    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:53.949731    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:53.950119    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:53.992042    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:53.992177    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:54.013626    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:54.013753    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:54.029864    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:54.029945    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:54.041825    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:54.041896    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:54.052368    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:54.052429    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:54.063455    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:54.063520    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:54.073795    4989 logs.go:276] 0 containers: []
	W0816 10:39:54.073810    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:54.073863    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:54.084098    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:54.084117    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:54.084122    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:54.099033    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:54.099045    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:54.110838    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:54.110850    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:54.127022    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:54.127033    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:54.139519    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:54.139533    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:54.154196    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:54.154210    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:54.188246    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:54.188253    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:54.202270    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:54.202279    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:54.214189    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:54.214199    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:54.218822    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:54.218829    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:54.230123    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:54.230133    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:54.253016    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:54.253028    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:54.278025    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:54.278036    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:54.289918    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:54.289932    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:54.325796    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:54.325809    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:56.838675    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:40:01.840854    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:01.845250    4989 out.go:201] 
	W0816 10:40:01.851242    4989 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0816 10:40:01.851250    4989 out.go:270] * 
	W0816 10:40:01.851711    4989 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:40:01.867155    4989 out.go:201] 
** /stderr **
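
The stderr trace above captures the failure mode: minikube polls https://10.0.2.15:8443/healthz in a loop, each probe gives up after roughly five seconds with a client timeout, and between probes it re-enumerates the kube-system containers and regathers their logs. The cycle repeats until the 6m0s node-start deadline expires. As a hedged reproduction sketch (profile name, binary path, and endpoint taken from the log above; assumes the guest VM is still up and has curl available):

	# Re-run the failing healthz probe by hand from inside the guest.
	# --max-time 5 mirrors the ~5s client timeout seen in each api_server.go probe;
	# -k skips TLS verification since the cluster CA is not passed to curl.
	out/minikube-darwin-arm64 ssh -p running-upgrade-260000 -- \
	  curl -k --max-time 5 https://10.0.2.15:8443/healthz
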
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-260000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-16 10:40:01.957343 -0700 PDT m=+3151.224626501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-260000 -n running-upgrade-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-260000 -n running-upgrade-260000: exit status 2 (15.64819225s)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
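
The status probe itself is consistent with the stderr trace: the host prints Running on stdout, yet the command exits non-zero after a 15.6s wait, i.e. the VM is up but the cluster behind it never became healthy. A hedged way to see the per-component breakdown is to drop the --format filter:

	# Unfiltered status lists host, kubelet, apiserver, and kubeconfig state
	# separately; on a cluster in this condition the host is typically Running
	# while the apiserver is not.
	out/minikube-darwin-arm64 status -p running-upgrade-260000
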
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-260000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
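Note on the post-mortem dump that follows: the Audit table records recent minikube invocations across all profiles on this agent, so entries from the cert, systemd, docker-flags, and kubernetes-upgrade tests appear alongside running-upgrade-260000, and the "Last Start" section captures the most recent start on the machine, which here is stopped-upgrade-403000 rather than the profile under test.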
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-588000          | force-systemd-flag-588000 | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-552000              | force-systemd-env-552000  | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-552000           | force-systemd-env-552000  | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT | 16 Aug 24 10:30 PDT |
	| start   | -p docker-flags-735000                | docker-flags-735000       | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-588000             | force-systemd-flag-588000 | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-588000          | force-systemd-flag-588000 | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT | 16 Aug 24 10:30 PDT |
	| start   | -p cert-expiration-105000             | cert-expiration-105000    | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-735000 ssh               | docker-flags-735000       | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-735000 ssh               | docker-flags-735000       | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-735000                | docker-flags-735000       | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT | 16 Aug 24 10:30 PDT |
	| start   | -p cert-options-000000                | cert-options-000000       | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-000000 ssh               | cert-options-000000       | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-000000 -- sudo        | cert-options-000000       | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-000000                | cert-options-000000       | jenkins | v1.33.1 | 16 Aug 24 10:30 PDT | 16 Aug 24 10:30 PDT |
	| start   | -p running-upgrade-260000             | minikube                  | jenkins | v1.26.0 | 16 Aug 24 10:30 PDT | 16 Aug 24 10:31 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-260000             | running-upgrade-260000    | jenkins | v1.33.1 | 16 Aug 24 10:31 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-105000             | cert-expiration-105000    | jenkins | v1.33.1 | 16 Aug 24 10:33 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-105000             | cert-expiration-105000    | jenkins | v1.33.1 | 16 Aug 24 10:33 PDT | 16 Aug 24 10:33 PDT |
	| start   | -p kubernetes-upgrade-629000          | kubernetes-upgrade-629000 | jenkins | v1.33.1 | 16 Aug 24 10:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-629000          | kubernetes-upgrade-629000 | jenkins | v1.33.1 | 16 Aug 24 10:33 PDT | 16 Aug 24 10:33 PDT |
	| start   | -p kubernetes-upgrade-629000          | kubernetes-upgrade-629000 | jenkins | v1.33.1 | 16 Aug 24 10:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-629000          | kubernetes-upgrade-629000 | jenkins | v1.33.1 | 16 Aug 24 10:33 PDT | 16 Aug 24 10:33 PDT |
	| start   | -p stopped-upgrade-403000             | minikube                  | jenkins | v1.26.0 | 16 Aug 24 10:33 PDT | 16 Aug 24 10:34 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-403000 stop           | minikube                  | jenkins | v1.26.0 | 16 Aug 24 10:34 PDT | 16 Aug 24 10:34 PDT |
	| start   | -p stopped-upgrade-403000             | stopped-upgrade-403000    | jenkins | v1.33.1 | 16 Aug 24 10:34 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 10:34:47
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 10:34:47.945763    5136 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:34:47.945908    5136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:34:47.945912    5136 out.go:358] Setting ErrFile to fd 2...
	I0816 10:34:47.945915    5136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:34:47.946045    5136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:34:47.947176    5136 out.go:352] Setting JSON to false
	I0816 10:34:47.964781    5136 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3850,"bootTime":1723825837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:34:47.964853    5136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:34:47.969694    5136 out.go:177] * [stopped-upgrade-403000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:34:47.977619    5136 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:34:47.977656    5136 notify.go:220] Checking for updates...
	I0816 10:34:47.984627    5136 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:34:47.987576    5136 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:34:47.990624    5136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:34:47.993639    5136 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:34:47.996762    5136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:34:47.999877    5136 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:34:48.003631    5136 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 10:34:48.006584    5136 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:34:48.010602    5136 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 10:34:48.016619    5136 start.go:297] selected driver: qemu2
	I0816 10:34:48.016626    5136 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 10:34:48.016694    5136 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:34:48.019222    5136 cni.go:84] Creating CNI manager for ""
	I0816 10:34:48.019242    5136 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
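The bridge recommendation at cni.go:158 above is driven purely by the driver/runtime/Kubernetes-version triple: on v1.24+ dockershim networking is gone, so a VM driver paired with the docker runtime gets an explicit bridge CNI. A minimal Go sketch of that decision (helper name and shape are illustrative, not minikube's actual pkg/minikube/cni API):

    package main

    import "fmt"

    // chooseCNI mirrors the behavior logged above: qemu2 driver + docker
    // runtime on Kubernetes v1.24 or newer -> recommend the bridge CNI.
    func chooseCNI(driver, runtime string, k8sMinor int) string {
        if driver == "qemu2" && runtime == "docker" && k8sMinor >= 24 {
            return "bridge"
        }
        return "" // otherwise keep whatever the profile configured
    }

    func main() {
        fmt.Println(chooseCNI("qemu2", "docker", 24)) // bridge
    }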
	I0816 10:34:48.019264    5136 start.go:340] cluster config:
	{Name:stopped-upgrade-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 10:34:48.019338    5136 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:34:48.026632    5136 out.go:177] * Starting "stopped-upgrade-403000" primary control-plane node in "stopped-upgrade-403000" cluster
	I0816 10:34:48.030631    5136 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0816 10:34:48.030647    5136 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0816 10:34:48.030657    5136 cache.go:56] Caching tarball of preloaded images
	I0816 10:34:48.030730    5136 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:34:48.030736    5136 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
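The preload lookup above is just a stat of a cache file whose name encodes the preload schema version, Kubernetes version, runtime, storage driver, and CPU architecture. A sketch of that check, with the path layout read off the log line rather than taken from minikube source:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath reconstructs the cache path seen in the log; "v18" is the
    // preload schema version appearing in the tarball name above.
    func preloadPath(minikubeHome, k8sVer, runtime, arch string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4", k8sVer, runtime, arch)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.24.1", "docker", "arm64")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("found local preload, skipping download:", p)
        }
    }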
	I0816 10:34:48.030798    5136 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/config.json ...
	I0816 10:34:48.031236    5136 start.go:360] acquireMachinesLock for stopped-upgrade-403000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:34:48.031269    5136 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "stopped-upgrade-403000"
	I0816 10:34:48.031278    5136 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:34:48.031284    5136 fix.go:54] fixHost starting: 
	I0816 10:34:48.031388    5136 fix.go:112] recreateIfNeeded on stopped-upgrade-403000: state=Stopped err=<nil>
	W0816 10:34:48.031396    5136 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:34:48.035719    5136 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-403000" ...
	I0816 10:34:44.927215    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:34:48.043619    5136 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:34:48.043687    5136 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50463-:22,hostfwd=tcp::50464-:2376,hostname=stopped-upgrade-403000 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/disk.qcow2
	I0816 10:34:48.090633    5136 main.go:141] libmachine: STDOUT: 
	I0816 10:34:48.090668    5136 main.go:141] libmachine: STDERR: 
	I0816 10:34:48.090673    5136 main.go:141] libmachine: Waiting for VM to start (ssh -p 50463 docker@127.0.0.1)...
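The qemu-system-aarch64 invocation at main.go:141 above is assembled flag by flag by the qemu2 driver. A sketch of a builder producing the same argv (the builder itself is hypothetical; every flag is copied from the logged command):

    package main

    import (
        "fmt"
        "strings"
    )

    func qemuArgs(mem, cpus int, fw, iso, disk, mon, pid, name string, ssh, docker int) []string {
        return []string{
            "-M", "virt,highmem=off",        // arm64 "virt" machine; high memory off for hvf
            "-cpu", "host", "-accel", "hvf", // "Using hvf for hardware acceleration" (qemu.go:418)
            "-drive", "file=" + fw + ",readonly=on,format=raw,if=pflash", // EDK2 UEFI firmware
            "-display", "none",
            "-m", fmt.Sprint(mem), "-smp", fmt.Sprint(cpus),
            "-boot", "d", "-cdrom", iso,
            "-qmp", "unix:" + mon + ",server,nowait", // QMP socket for machine-state queries
            "-pidfile", pid,
            // user-mode NIC: host ports forward to guest SSH (22) and Docker TLS (2376)
            "-nic", fmt.Sprintf("user,model=virtio,hostfwd=tcp::%d-:22,hostfwd=tcp::%d-:2376,hostname=%s", ssh, docker, name),
            "-daemonize", disk,
        }
    }

    func main() {
        fmt.Println(strings.Join(qemuArgs(2200, 2, "edk2-aarch64-code.fd", "boot2docker.iso",
            "disk.qcow2", "monitor", "qemu.pid", "stopped-upgrade-403000", 50463, 50464), " "))
    }

The two hostfwd entries are what make the rest of the provisioning possible: host port 50463 forwards to guest SSH and 50464 to Docker, which is why every later SSH client in this log dials localhost:50463.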
	I0816 10:34:49.929385    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:34:49.929484    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:34:49.939980    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:34:49.940058    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:34:49.953162    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:34:49.953233    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:34:49.963537    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:34:49.963608    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:34:49.973853    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:34:49.973927    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:34:49.985202    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:34:49.985276    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:34:49.997725    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:34:49.997794    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:34:50.008058    4989 logs.go:276] 0 containers: []
	W0816 10:34:50.008070    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:34:50.008129    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:34:50.018260    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:34:50.018278    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:34:50.018284    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:34:50.051918    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:34:50.051931    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:34:50.065521    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:34:50.065534    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:34:50.077440    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:34:50.077451    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:34:50.094699    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:34:50.094708    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:34:50.109129    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:34:50.109141    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:34:50.113709    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:34:50.113718    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:34:50.128760    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:34:50.128772    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:34:50.144182    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:34:50.144191    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:34:50.168358    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:34:50.168365    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:34:50.180281    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:34:50.180294    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:34:50.194454    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:34:50.194464    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:34:50.205191    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:34:50.205201    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:34:50.218873    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:34:50.218885    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:34:50.230926    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:34:50.230936    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:34:50.244593    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:34:50.244601    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:34:50.286615    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:34:50.286624    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
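The loop above (pid 4989) is the degraded-apiserver diagnostic path: each healthz probe gives up after roughly five seconds, and on failure minikube enumerates the k8s_* containers with docker ps name filters and tails 400 lines from each, plus kubelet, dmesg, Docker, and "describe nodes". A sketch of the probe itself, assuming the ~5s client timeout implied by the log timestamps and skipping TLS verification against the in-VM self-signed apiserver cert:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout:   5 * time.Second, // matches the ~5s gaps between probe and "stopped:" lines
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // surfaces as "context deadline exceeded" in the log
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
    }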
	I0816 10:34:52.823480    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:34:57.826112    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:34:57.826269    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:34:57.838102    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:34:57.838181    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:34:57.849109    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:34:57.849183    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:34:57.863285    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:34:57.863355    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:34:57.874088    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:34:57.874160    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:34:57.884938    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:34:57.885009    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:34:57.896472    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:34:57.896536    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:34:57.907477    4989 logs.go:276] 0 containers: []
	W0816 10:34:57.907490    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:34:57.907550    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:34:57.918427    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:34:57.918444    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:34:57.918452    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:34:57.936493    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:34:57.936504    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:34:57.948274    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:34:57.948288    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:34:57.968960    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:34:57.968972    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:34:57.980722    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:34:57.980736    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:34:58.005146    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:34:58.005154    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:34:58.040850    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:34:58.040861    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:34:58.055618    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:34:58.055628    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:34:58.068854    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:34:58.068868    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:34:58.083581    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:34:58.083595    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:34:58.088121    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:34:58.088130    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:34:58.123947    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:34:58.123957    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:34:58.135886    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:34:58.135896    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:34:58.147151    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:34:58.147164    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:34:58.159305    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:34:58.159315    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:34:58.202010    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:34:58.202020    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:34:58.216875    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:34:58.216887    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:00.733712    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:05.736541    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:05.736958    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:35:05.778343    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:35:05.778445    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:35:05.797921    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:35:05.797993    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:35:05.810570    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:35:05.810650    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:35:05.821466    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:35:05.821536    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:35:05.833883    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:35:05.833945    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:35:05.844109    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:35:05.844178    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:35:05.854475    4989 logs.go:276] 0 containers: []
	W0816 10:35:05.854497    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:35:05.854559    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:35:05.873464    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:35:05.873480    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:35:05.873485    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:35:05.909586    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:35:05.909600    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:35:05.926072    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:35:05.926083    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:35:05.943888    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:35:05.943899    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:35:05.967864    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:35:05.967872    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:35:05.981875    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:35:05.981886    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:35:06.002423    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:35:06.002433    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:06.017001    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:35:06.017014    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:35:06.028346    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:35:06.028358    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:35:06.062976    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:35:06.062989    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:35:06.074838    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:35:06.074848    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:35:06.086395    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:35:06.086405    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:35:06.098277    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:35:06.098287    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:35:06.110718    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:35:06.110729    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:35:06.153039    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:35:06.153048    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:35:06.157369    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:35:06.157378    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:35:06.170837    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:35:06.170851    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:35:08.135616    5136 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/config.json ...
	I0816 10:35:08.136486    5136 machine.go:93] provisionDockerMachine start ...
	I0816 10:35:08.136585    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:08.136940    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:08.136948    5136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:35:08.227595    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 10:35:08.227629    5136 buildroot.go:166] provisioning hostname "stopped-upgrade-403000"
	I0816 10:35:08.227744    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:08.227953    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:08.227963    5136 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-403000 && echo "stopped-upgrade-403000" | sudo tee /etc/hostname
	I0816 10:35:08.309692    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-403000
	
	I0816 10:35:08.309777    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:08.309957    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:08.309972    5136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-403000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-403000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-403000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:35:08.386051    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 10:35:08.386065    5136 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1189/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1189/.minikube}
	I0816 10:35:08.386081    5136 buildroot.go:174] setting up certificates
	I0816 10:35:08.386096    5136 provision.go:84] configureAuth start
	I0816 10:35:08.386103    5136 provision.go:143] copyHostCerts
	I0816 10:35:08.386189    5136 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.pem, removing ...
	I0816 10:35:08.386195    5136 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.pem
	I0816 10:35:08.386305    5136 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.pem (1082 bytes)
	I0816 10:35:08.386492    5136 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1189/.minikube/cert.pem, removing ...
	I0816 10:35:08.386496    5136 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1189/.minikube/cert.pem
	I0816 10:35:08.386556    5136 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1189/.minikube/cert.pem (1123 bytes)
	I0816 10:35:08.386673    5136 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1189/.minikube/key.pem, removing ...
	I0816 10:35:08.386676    5136 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1189/.minikube/key.pem
	I0816 10:35:08.386727    5136 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1189/.minikube/key.pem (1679 bytes)
	I0816 10:35:08.386823    5136 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-403000 san=[127.0.0.1 localhost minikube stopped-upgrade-403000]
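The server certificate above is signed by the minikube CA with SANs covering every name the forwarded Docker endpoint may be dialed as. A condensed crypto/x509 sketch of that step (key sizes and code layout are assumptions; the SAN list and 26280h lifetime come from the log and cluster config):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-403000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            // SANs from the log: san=[127.0.0.1 localhost minikube stopped-upgrade-403000]
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-403000"},
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
        fmt.Println(len(der), err)
    }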
	I0816 10:35:08.473164    5136 provision.go:177] copyRemoteCerts
	I0816 10:35:08.473191    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:35:08.473200    5136 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/id_rsa Username:docker}
	I0816 10:35:08.510762    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 10:35:08.517573    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0816 10:35:08.524611    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:35:08.531934    5136 provision.go:87] duration metric: took 145.835417ms to configureAuth
	I0816 10:35:08.531944    5136 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:35:08.532079    5136 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:35:08.532115    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:08.532203    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:08.532208    5136 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:35:08.602067    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:35:08.602076    5136 buildroot.go:70] root file system type: tmpfs
	I0816 10:35:08.602128    5136 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:35:08.602181    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:08.602297    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:08.602333    5136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:35:08.674591    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:35:08.674646    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:08.674765    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:08.674773    5136 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:35:09.058118    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:35:09.058132    5136 machine.go:96] duration metric: took 921.658125ms to provisionDockerMachine
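The diff-then-move dance above is what keeps provisioning idempotent: the rendered unit is written to docker.service.new, diffed against the live unit, and only on a difference moved into place and followed by daemon-reload, enable, and restart. Here diff fails because no unit exists yet, so the new file is installed and the symlink created. A local sketch of the same pattern (the real flow runs these commands over SSH via ssh_runner; os/exec stands in):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func updateUnit(path string, rendered []byte) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, rendered) {
            return nil // unchanged: skip daemon-reload and restart entirely
        }
        if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
            return err
        }
        if err := os.Rename(path+".new", path); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "-f", "daemon-reload"},
            {"systemctl", "-f", "enable", "docker"},
            {"systemctl", "-f", "restart", "docker"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %s", err, out)
            }
        }
        return nil
    }

    func main() { fmt.Println(updateUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n"))) }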
	I0816 10:35:09.058139    5136 start.go:293] postStartSetup for "stopped-upgrade-403000" (driver="qemu2")
	I0816 10:35:09.058146    5136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:35:09.058213    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:35:09.058222    5136 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/id_rsa Username:docker}
	I0816 10:35:09.095798    5136 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:35:09.097185    5136 info.go:137] Remote host: Buildroot 2021.02.12
	I0816 10:35:09.097193    5136 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1189/.minikube/addons for local assets ...
	I0816 10:35:09.097275    5136 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1189/.minikube/files for local assets ...
	I0816 10:35:09.097395    5136 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1189/.minikube/files/etc/ssl/certs/20542.pem -> 20542.pem in /etc/ssl/certs
	I0816 10:35:09.097522    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:35:09.100423    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/files/etc/ssl/certs/20542.pem --> /etc/ssl/certs/20542.pem (1708 bytes)
	I0816 10:35:09.107857    5136 start.go:296] duration metric: took 49.713583ms for postStartSetup
	I0816 10:35:09.107875    5136 fix.go:56] duration metric: took 21.077043583s for fixHost
	I0816 10:35:09.107908    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:09.108011    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:09.108016    5136 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:35:09.178400    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829709.337763504
	
	I0816 10:35:09.178407    5136 fix.go:216] guest clock: 1723829709.337763504
	I0816 10:35:09.178411    5136 fix.go:229] Guest: 2024-08-16 10:35:09.337763504 -0700 PDT Remote: 2024-08-16 10:35:09.107877 -0700 PDT m=+21.187146167 (delta=229.886504ms)
	I0816 10:35:09.178422    5136 fix.go:200] guest clock delta is within tolerance: 229.886504ms
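The guest clock check above compares the output of `date +%s.%N` inside the VM with host time captured around the SSH round-trip, and skips resynchronization while the delta stays small. A sketch of the delta computation using the values from the log (the exact tolerance is not shown; one second is assumed here):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output into a time and
    // returns how far it is ahead of (or behind) the host clock.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        // Guest and host timestamps taken from the fix.go:229 line above.
        d, _ := clockDelta("1723829709.337763504", time.Unix(1723829709, 107877000))
        fmt.Println(d, d < time.Second) // ~229ms, within the assumed tolerance
    }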
	I0816 10:35:09.178425    5136 start.go:83] releasing machines lock for "stopped-upgrade-403000", held for 21.147603417s
	I0816 10:35:09.178482    5136 ssh_runner.go:195] Run: cat /version.json
	I0816 10:35:09.178491    5136 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/id_rsa Username:docker}
	I0816 10:35:09.178495    5136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:35:09.178511    5136 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/id_rsa Username:docker}
	W0816 10:35:09.179048    5136 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50463: connect: connection refused
	I0816 10:35:09.179073    5136 retry.go:31] will retry after 239.326818ms: dial tcp [::1]:50463: connect: connection refused
	W0816 10:35:09.214211    5136 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0816 10:35:09.214257    5136 ssh_runner.go:195] Run: systemctl --version
	I0816 10:35:09.216074    5136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:35:09.217772    5136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:35:09.217797    5136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0816 10:35:09.220700    5136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0816 10:35:09.225628    5136 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0816 10:35:09.225642    5136 start.go:495] detecting cgroup driver to use...
	I0816 10:35:09.225711    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:35:09.232663    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0816 10:35:09.236039    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:35:09.239552    5136 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:35:09.239584    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:35:09.242597    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:35:09.245405    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:35:09.248646    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:35:09.252123    5136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:35:09.255615    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:35:09.258509    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:35:09.261279    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:35:09.264427    5136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:35:09.267328    5136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:35:09.269980    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:35:09.333896    5136 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 10:35:09.342336    5136 start.go:495] detecting cgroup driver to use...
	I0816 10:35:09.342407    5136 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:35:09.350021    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:35:09.355402    5136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:35:09.364554    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:35:09.369534    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:35:09.374522    5136 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:35:09.426844    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:35:09.432061    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:35:09.437582    5136 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:35:09.439189    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:35:09.441768    5136 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0816 10:35:09.446553    5136 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:35:09.514887    5136 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:35:09.700833    5136 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:35:09.700914    5136 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
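The 130-byte /etc/docker/daemon.json copied above is not echoed into the log; the sketch below renders a plausible equivalent that pins the cgroupfs driver, which is all the surrounding "configuring docker to use cgroupfs" message confirms. The remaining fields are assumptions, not necessarily the file minikube actually writes:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        cfg := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"}, // the one setting the log confirms by intent
            "log-driver":     "json-file",                              // assumed
            "log-opts":       map[string]string{"max-size": "100m"},    // assumed
            "storage-driver": "overlay2",                               // assumed; matches the preload's overlay2 layout
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b))
    }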
	I0816 10:35:09.707176    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:35:09.783752    5136 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:35:10.934982    5136 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.151235792s)
	I0816 10:35:10.935052    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:35:10.939554    5136 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:35:10.945892    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:35:10.951005    5136 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:35:11.030883    5136 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:35:11.105527    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:35:11.186520    5136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:35:11.192444    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:35:11.197419    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:35:11.273787    5136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0816 10:35:11.311150    5136 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:35:11.311224    5136 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:35:11.313979    5136 start.go:563] Will wait 60s for crictl version
	I0816 10:35:11.314037    5136 ssh_runner.go:195] Run: which crictl
	I0816 10:35:11.315471    5136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:35:11.329863    5136 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0816 10:35:11.329929    5136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:35:11.345689    5136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:35:11.367287    5136 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0816 10:35:11.367415    5136 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0816 10:35:11.368748    5136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:35:11.372786    5136 kubeadm.go:883] updating cluster {Name:stopped-upgrade-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0816 10:35:11.372834    5136 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0816 10:35:11.372870    5136 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:35:11.383598    5136 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:35:11.383606    5136 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0816 10:35:11.383646    5136 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:35:11.386963    5136 ssh_runner.go:195] Run: which lz4
	I0816 10:35:11.388295    5136 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 10:35:11.389679    5136 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 10:35:11.389690    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0816 10:35:12.365896    5136 docker.go:649] duration metric: took 977.648292ms to copy over tarball
	I0816 10:35:12.365953    5136 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 10:35:08.686931    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:13.534088    5136 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.168139s)
	I0816 10:35:13.534103    5136 ssh_runner.go:146] rm: /preloaded.tar.lz4
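The preload restore sequence above is: stat /preloaded.tar.lz4 (absent), scp the ~359 MB lz4 tarball from the host cache, untar it over /var with security xattrs preserved so file capabilities survive, then delete the tarball and restart docker to pick up the repopulated image store. A sketch of the in-VM half, with the exact tar flags from the log (the runner is a local stand-in for ssh_runner):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func restorePreload(tarball string) error {
        // Matches: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract: %v: %s", err, out)
        }
        return os.Remove(tarball) // the rm at ssh_runner.go:146 above
    }

    func main() { fmt.Println(restorePreload("/preloaded.tar.lz4")) }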
	I0816 10:35:13.549376    5136 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:35:13.552531    5136 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0816 10:35:13.557254    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:35:13.633610    5136 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:35:15.513867    5136 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.8802815s)
	I0816 10:35:15.513955    5136 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:35:15.527255    5136 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:35:15.527264    5136 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0816 10:35:15.527269    5136 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 10:35:15.531146    5136 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:35:15.533516    5136 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:35:15.534462    5136 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0816 10:35:15.534767    5136 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:35:15.537042    5136 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:35:15.538816    5136 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0816 10:35:15.538976    5136 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0816 10:35:15.539371    5136 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:35:15.540797    5136 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:35:15.541356    5136 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0816 10:35:15.541365    5136 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:35:15.541477    5136 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:35:15.543063    5136 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:35:15.543993    5136 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:35:15.544063    5136 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:35:15.544937    5136 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:35:16.021883    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:35:16.025589    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0816 10:35:16.025868    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:35:16.025888    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0816 10:35:16.034285    5136 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0816 10:35:16.034314    5136 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:35:16.034366    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:35:16.058080    5136 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0816 10:35:16.058105    5136 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0816 10:35:16.058161    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0816 10:35:16.058186    5136 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0816 10:35:16.058198    5136 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:35:16.058225    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:35:16.058213    5136 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0816 10:35:16.058250    5136 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0816 10:35:16.058271    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0816 10:35:16.061372    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0816 10:35:16.069779    5136 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0816 10:35:16.069893    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:35:16.077232    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0816 10:35:16.077279    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0816 10:35:16.077377    5136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0816 10:35:16.078493    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:35:16.084017    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0816 10:35:16.084146    5136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0816 10:35:16.093172    5136 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0816 10:35:16.093200    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0816 10:35:16.093254    5136 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0816 10:35:16.093274    5136 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:35:16.093314    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:35:16.094433    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:35:16.096449    5136 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0816 10:35:16.096465    5136 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:35:16.096494    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:35:16.098223    5136 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0816 10:35:16.098247    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0816 10:35:16.128641    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0816 10:35:16.128767    5136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0816 10:35:16.130610    5136 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0816 10:35:16.130641    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0816 10:35:16.130648    5136 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:35:16.130784    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:35:16.142166    5136 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0816 10:35:16.142201    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0816 10:35:16.142221    5136 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0816 10:35:16.142227    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0816 10:35:16.161651    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0816 10:35:16.163432    5136 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0816 10:35:16.163547    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:35:16.211414    5136 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0816 10:35:16.222338    5136 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0816 10:35:16.222370    5136 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:35:16.222437    5136 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:35:16.256466    5136 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0816 10:35:16.256492    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0816 10:35:16.279592    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 10:35:16.279731    5136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 10:35:16.354784    5136 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0816 10:35:16.354827    5136 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0816 10:35:16.354853    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0816 10:35:16.423920    5136 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 10:35:16.423938    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0816 10:35:16.749534    5136 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 10:35:16.749560    5136 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0816 10:35:16.749567    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0816 10:35:16.903236    5136 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0816 10:35:16.903274    5136 cache_images.go:92] duration metric: took 1.376027458s to LoadCachedImages
	W0816 10:35:16.903321    5136 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
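Note on the sequence above: each image is inspected in the VM's runtime (docker image inspect --format {{.Id}}); when the stored hash differs from the cached one, the stale tag is removed, the tarball is copied in, and it is piped to docker load. The final warning fires because kube-scheduler_v1.24.1 was missing from the host cache, so LoadCachedImages reports failure even though the other images transferred. A minimal local Go sketch of that decision follows; helper names are illustrative, execution is local rather than over SSH as minikube does, and the hash/path are taken from the pause:3.7 lines above.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imageID asks the runtime for an image's ID; "" means the tag is
    // absent ("No such image" surfaces as a non-zero exit from docker).
    func imageID(tag string) string {
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Id}}", tag).Output()
    	if err != nil {
    		return ""
    	}
    	return strings.TrimSpace(string(out))
    }

    // ensureImage mirrors the log: when the runtime does not hold the
    // image at the expected hash, remove the stale tag and stream the
    // cached tarball into `docker load`.
    func ensureImage(tag, wantID, tarball string) error {
    	// Contains() tolerates docker's "sha256:" prefix on IDs.
    	if strings.Contains(imageID(tag), wantID) {
    		return nil // already present at the expected hash
    	}
    	_ = exec.Command("docker", "rmi", tag).Run() // best-effort removal
    	cmd := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("cat %q | docker load", tarball))
    	return cmd.Run()
    }

    func main() {
    	err := ensureImage("registry.k8s.io/pause:3.7",
    		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
    		"/var/lib/minikube/images/pause_3.7")
    	fmt.Println("load:", err)
    }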
	I0816 10:35:16.903335    5136 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0816 10:35:16.903386    5136 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-403000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:35:16.903451    5136 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:35:16.917404    5136 cni.go:84] Creating CNI manager for ""
	I0816 10:35:16.917416    5136 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:35:16.917421    5136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:35:16.917432    5136 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-403000 NodeName:stopped-upgrade-403000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:35:16.917504    5136 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-403000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 10:35:16.917560    5136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0816 10:35:16.920653    5136 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:35:16.920678    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 10:35:16.923913    5136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0816 10:35:16.928941    5136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:35:16.933971    5136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0816 10:35:16.939355    5136 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0816 10:35:16.940666    5136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
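The /etc/hosts rewrite above is idempotent: any line already ending in the control-plane hostname is filtered out, the fresh mapping is appended, and the result is written via a temp file plus sudo cp so reruns never duplicate the entry. A minimal Go sketch of the same idea, run locally and directly against the file rather than through bash as minikube does:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const host = "control-plane.minikube.internal"
    	const entry = "10.0.2.15\t" + host

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	// Drop any line already ending in "<tab><host>", matching the
    	// grep -v $'\t<host>$' filter in the log.
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry, "")
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("hosts entry ensured:", entry)
    }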
	I0816 10:35:16.944531    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:35:17.006607    5136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:35:17.011858    5136 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000 for IP: 10.0.2.15
	I0816 10:35:17.011864    5136 certs.go:194] generating shared ca certs ...
	I0816 10:35:17.011872    5136 certs.go:226] acquiring lock for ca certs: {Name:mkd0f48b500cbb75fb3e9a7c625fdb17e399313f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:35:17.012028    5136 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.key
	I0816 10:35:17.012079    5136 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/proxy-client-ca.key
	I0816 10:35:17.012084    5136 certs.go:256] generating profile certs ...
	I0816 10:35:17.012157    5136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/client.key
	I0816 10:35:17.012174    5136 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.key.feb6d76f
	I0816 10:35:17.012185    5136 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.crt.feb6d76f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0816 10:35:17.135094    5136 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.crt.feb6d76f ...
	I0816 10:35:17.135106    5136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.crt.feb6d76f: {Name:mk27c02f3c1b53070f9e389840434de4c108251c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:35:17.136522    5136 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.key.feb6d76f ...
	I0816 10:35:17.136536    5136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.key.feb6d76f: {Name:mkba004d73043a9e35c85af6ee5e0accff6107ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:35:17.136698    5136 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.crt.feb6d76f -> /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.crt
	I0816 10:35:17.136851    5136 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.key.feb6d76f -> /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.key
	I0816 10:35:17.137010    5136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/proxy-client.key
	I0816 10:35:17.137153    5136 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/2054.pem (1338 bytes)
	W0816 10:35:17.137184    5136 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/2054_empty.pem, impossibly tiny 0 bytes
	I0816 10:35:17.137191    5136 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:35:17.137211    5136 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem (1082 bytes)
	I0816 10:35:17.137235    5136 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:35:17.137255    5136 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/key.pem (1679 bytes)
	I0816 10:35:17.137297    5136 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/files/etc/ssl/certs/20542.pem (1708 bytes)
	I0816 10:35:17.137672    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:35:17.144521    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 10:35:17.151488    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:35:17.158494    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 10:35:17.165109    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 10:35:17.172165    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:35:17.179630    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:35:17.186847    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 10:35:17.193579    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/files/etc/ssl/certs/20542.pem --> /usr/share/ca-certificates/20542.pem (1708 bytes)
	I0816 10:35:17.200461    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:35:17.207664    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/2054.pem --> /usr/share/ca-certificates/2054.pem (1338 bytes)
	I0816 10:35:17.214452    5136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:35:17.219361    5136 ssh_runner.go:195] Run: openssl version
	I0816 10:35:17.221234    5136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:35:17.224536    5136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:35:17.225949    5136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:35:17.225971    5136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:35:17.227670    5136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:35:17.230813    5136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2054.pem && ln -fs /usr/share/ca-certificates/2054.pem /etc/ssl/certs/2054.pem"
	I0816 10:35:17.233642    5136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2054.pem
	I0816 10:35:17.235037    5136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:55 /usr/share/ca-certificates/2054.pem
	I0816 10:35:17.235056    5136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2054.pem
	I0816 10:35:17.236926    5136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2054.pem /etc/ssl/certs/51391683.0"
	I0816 10:35:17.240386    5136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20542.pem && ln -fs /usr/share/ca-certificates/20542.pem /etc/ssl/certs/20542.pem"
	I0816 10:35:17.243808    5136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20542.pem
	I0816 10:35:17.245229    5136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:55 /usr/share/ca-certificates/20542.pem
	I0816 10:35:17.245253    5136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20542.pem
	I0816 10:35:17.246952    5136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20542.pem /etc/ssl/certs/3ec20f2e.0"
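The three test/ln pairs above wire the PEMs into the system trust store: OpenSSL looks a CA up by a symlink named <subject-hash>.0 in /etc/ssl/certs, and the hash comes from openssl x509 -hash -noout (hence b5213941.0 for minikubeCA.pem). A small Go sketch of the same wiring, shelling out to openssl for the hash; paths are illustrative and it needs root to write the link:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	// openssl prints the subject hash used as the symlink name.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // mimic ln -fs: replace an existing link
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", pem)
    }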
	I0816 10:35:17.249809    5136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:35:17.251181    5136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 10:35:17.253166    5136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 10:35:17.255056    5136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 10:35:17.257778    5136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 10:35:17.259518    5136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 10:35:17.261482    5136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
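Each openssl x509 -checkend 86400 above exits non-zero if the certificate expires within 86400 seconds (24 hours), which is how minikube decides whether a control-plane cert needs regeneration. An equivalent check in Go using crypto/x509:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // Equivalent of `openssl x509 -checkend 86400`: succeed only if the
    // certificate stays valid for at least another 24 hours.
    func main() {
    	if len(os.Args) < 2 {
    		panic("usage: checkend <cert.pem>")
    	}
    	data, err := os.ReadFile(os.Args[1])
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid for at least 24h, NotAfter:", cert.NotAfter)
    }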
	I0816 10:35:17.263281    5136 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 10:35:17.263354    5136 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:35:17.273574    5136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:35:17.276819    5136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 10:35:17.276826    5136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 10:35:17.276852    5136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 10:35:17.280212    5136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:35:17.280510    5136 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-403000" does not appear in /Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:35:17.280611    5136 kubeconfig.go:62] /Users/jenkins/minikube-integration/19461-1189/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-403000" cluster setting kubeconfig missing "stopped-upgrade-403000" context setting]
	I0816 10:35:17.280797    5136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/kubeconfig: {Name:mk2e4f2b039616ddb85ed20d74e703a928518229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:35:17.281211    5136 kapi.go:59] client config for stopped-upgrade-403000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101a3d610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:35:17.281539    5136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 10:35:17.284428    5136 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-403000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
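The drift detection above rests on diff's exit code: 0 means the on-disk kubeadm.yaml matches the freshly generated .new file, 1 means they differ (here: criSocket gained the unix:// scheme and cgroupDriver moved from systemd to cgroupfs), and a status of 1 is read as "reconfigure the cluster from the new file". A minimal Go sketch of that check; paths are illustrative:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
    	var ee *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("no drift; keep existing config")
    	case errors.As(err, &ee) && ee.ExitCode() == 1:
    		fmt.Printf("drift detected, will reconfigure:\n%s", out)
    	default:
    		panic(err) // exit code 2: diff itself failed
    	}
    }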
	I0816 10:35:17.284436    5136 kubeadm.go:1160] stopping kube-system containers ...
	I0816 10:35:17.284475    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:35:17.294968    5136 docker.go:483] Stopping containers: [9533d81142ad 5db973a16a19 c96cfddd42cc 44ae055ab8e7 197bec61c229 b623ce8dc29a a0e70d78570e dee52b6f306c]
	I0816 10:35:17.295033    5136 ssh_runner.go:195] Run: docker stop 9533d81142ad 5db973a16a19 c96cfddd42cc 44ae055ab8e7 197bec61c229 b623ce8dc29a a0e70d78570e dee52b6f306c
	I0816 10:35:17.305287    5136 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 10:35:17.311096    5136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:35:17.313787    5136 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:35:17.313793    5136 kubeadm.go:157] found existing configuration files:
	
	I0816 10:35:17.313812    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf
	I0816 10:35:17.316563    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:35:17.316598    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:35:17.319440    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf
	I0816 10:35:17.322063    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:35:17.322089    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:35:17.324661    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf
	I0816 10:35:17.327560    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:35:17.327583    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:35:17.330102    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf
	I0816 10:35:17.332633    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:35:17.332659    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
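The four grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; a file missing the endpoint (or, as here, missing entirely) is removed so the subsequent kubeadm init phase kubeconfig all regenerates it. A Go sketch of the same loop, with the endpoint taken from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:50498"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // config already targets the right endpoint
    		}
    		_ = os.Remove(f) // missing or stale: let kubeadm regenerate it
    		fmt.Println("removed (or absent):", f)
    	}
    }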
	I0816 10:35:17.335481    5136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:35:17.338147    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:35:17.359893    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:35:17.670815    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:35:17.806486    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:35:17.834907    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:35:17.861088    5136 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:35:17.861162    5136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:35:13.688977    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:13.689058    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:35:13.701468    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:35:13.701535    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:35:13.712082    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:35:13.712148    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:35:13.722579    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:35:13.722641    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:35:13.733856    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:35:13.733930    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:35:13.743939    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:35:13.744000    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:35:13.757985    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:35:13.758058    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:35:13.768414    4989 logs.go:276] 0 containers: []
	W0816 10:35:13.768425    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:35:13.768479    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:35:13.779348    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:35:13.779370    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:35:13.779376    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:35:13.812591    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:35:13.812605    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:35:13.824376    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:35:13.824388    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:35:13.839214    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:35:13.839227    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:35:13.851091    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:35:13.851103    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:35:13.879714    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:35:13.879724    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:13.893730    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:35:13.893739    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:35:13.906279    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:35:13.906289    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:35:13.921107    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:35:13.921118    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:35:13.964441    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:35:13.964455    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:35:13.970436    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:35:13.970449    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:35:13.992205    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:35:13.992220    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:35:14.036964    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:35:14.036979    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:35:14.049301    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:35:14.049313    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:35:14.061356    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:35:14.061368    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:35:14.073487    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:35:14.073498    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:35:14.096192    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:35:14.096200    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:35:16.612126    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:18.363227    5136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:35:18.863233    5136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:35:18.867597    5136 api_server.go:72] duration metric: took 1.006530334s to wait for apiserver process to appear ...
	I0816 10:35:18.867608    5136 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:35:18.867621    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
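From here the two interleaved runs (PIDs 5136 and 4989) are in the same wait loop: first pgrep confirms a kube-apiserver process exists, then /healthz is polled with a short per-request timeout, and each "context deadline exceeded" is logged as stopped and retried until an overall deadline passes. A simplified Go sketch of that poll; InsecureSkipVerify is for the sketch only, as the real check verifies against the cluster CA, and the timeouts are illustrative:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-request; expiry reads as "stopped"
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err) // e.g. context deadline exceeded
    		} else {
    			if resp.StatusCode == http.StatusOK {
    				resp.Body.Close()
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("healthz status:", resp.StatusCode)
    			resp.Body.Close()
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver healthz")
    }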
	I0816 10:35:21.614267    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:21.614374    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:35:21.629121    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:35:21.629197    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:35:21.640007    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:35:21.640079    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:35:21.652993    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:35:21.653067    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:35:21.663754    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:35:21.663831    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:35:21.678716    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:35:21.678791    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:35:21.689849    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:35:21.689919    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:35:21.700431    4989 logs.go:276] 0 containers: []
	W0816 10:35:21.700443    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:35:21.700509    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:35:21.711059    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:35:21.711077    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:35:21.711083    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:35:21.723321    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:35:21.723334    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:35:21.737958    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:35:21.737969    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:35:21.750782    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:35:21.750793    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:35:21.775469    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:35:21.775480    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:35:21.794125    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:35:21.794137    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:35:21.807398    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:35:21.807410    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:35:21.850799    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:35:21.850819    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:35:21.868465    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:35:21.868475    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:35:21.909972    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:35:21.909987    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:21.927964    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:35:21.927977    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:35:21.932528    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:35:21.932537    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:35:21.973308    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:35:21.973320    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:35:21.989322    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:35:21.989335    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:35:22.003301    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:35:22.003312    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:35:22.019473    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:35:22.019485    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:35:22.032460    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:35:22.032474    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:35:23.869667    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:23.869712    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:24.553257    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:28.869954    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:28.870010    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:29.555369    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:29.555637    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:35:29.583021    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:35:29.583139    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:35:29.600552    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:35:29.600644    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:35:29.613826    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:35:29.613901    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:35:29.629170    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:35:29.629240    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:35:29.639691    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:35:29.639767    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:35:29.651940    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:35:29.652010    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:35:29.661660    4989 logs.go:276] 0 containers: []
	W0816 10:35:29.661669    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:35:29.661728    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:35:29.672818    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:35:29.672835    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:35:29.672840    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:35:29.689853    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:35:29.689865    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:35:29.712665    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:35:29.712673    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:35:29.724310    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:35:29.724320    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:35:29.736256    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:35:29.736267    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:35:29.769333    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:35:29.769342    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:35:29.785612    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:35:29.785623    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:35:29.799139    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:35:29.799149    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:35:29.805529    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:35:29.805538    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:35:29.840561    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:35:29.840574    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:35:29.855619    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:35:29.855633    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:35:29.867611    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:35:29.867627    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:35:29.878890    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:35:29.878900    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:35:29.893123    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:35:29.893135    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:35:29.907345    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:35:29.907356    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:35:29.920015    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:35:29.920029    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:35:29.962891    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:35:29.962904    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:32.479417    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:33.870368    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:33.870410    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:37.481575    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:37.481687    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:35:37.492702    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:35:37.492780    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:35:37.503002    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:35:37.503073    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:35:37.514121    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:35:37.514192    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:35:37.524528    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:35:37.524607    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:35:37.534881    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:35:37.534948    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:35:37.546053    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:35:37.546121    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:35:37.557015    4989 logs.go:276] 0 containers: []
	W0816 10:35:37.557026    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:35:37.557081    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:35:37.567430    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:35:37.567448    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:35:37.567454    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:35:37.590070    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:35:37.590082    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:35:37.601342    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:35:37.601352    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:35:37.641609    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:35:37.641621    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:35:37.646218    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:35:37.646225    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:35:37.659652    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:35:37.659670    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:35:37.673904    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:35:37.673914    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:35:37.685468    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:35:37.685477    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:35:37.696874    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:35:37.696884    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:35:37.731374    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:35:37.731385    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:35:37.745810    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:35:37.745821    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:35:37.761509    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:35:37.761519    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:35:37.774415    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:35:37.774429    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:37.788424    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:35:37.788437    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:35:37.799736    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:35:37.799748    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:35:37.835168    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:35:37.835181    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:35:37.859563    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:35:37.859572    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
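Each diagnostic cycle above follows the same two-step pattern: enumerate the containers for a component with a docker ps name filter, then tail each returned container's log. Run by hand inside the VM it would look roughly like the sketch below, reusing an ID from this cycle's own output; any other ID from the first step works the same way.

    # list all kube-apiserver containers, running or exited
    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'
    # tail the last 400 lines of one of the IDs returned above
    docker logs --tail 400 3fd79eaf68a8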
	I0816 10:35:38.870881    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:38.870917    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:40.373512    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:43.871579    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:43.871643    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:45.375700    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:45.375846    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:35:45.387556    4989 logs.go:276] 2 containers: [3fd79eaf68a8 a7a83a83ddc9]
	I0816 10:35:45.387635    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:35:45.402173    4989 logs.go:276] 2 containers: [41bcbba53d2a 1cf502de6722]
	I0816 10:35:45.402242    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:35:45.412214    4989 logs.go:276] 1 containers: [2b9d57ed42bf]
	I0816 10:35:45.412392    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:35:45.424666    4989 logs.go:276] 2 containers: [af6f1e1bca6a 359ce0ff7bb4]
	I0816 10:35:45.424733    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:35:45.434996    4989 logs.go:276] 1 containers: [b742a17388eb]
	I0816 10:35:45.435063    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:35:45.446012    4989 logs.go:276] 2 containers: [80b6b31b902b 74067d4f196b]
	I0816 10:35:45.446076    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:35:45.456580    4989 logs.go:276] 0 containers: []
	W0816 10:35:45.456594    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:35:45.456650    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:35:45.470789    4989 logs.go:276] 2 containers: [bd7196361880 6b701c4dbe92]
	I0816 10:35:45.470807    4989 logs.go:123] Gathering logs for etcd [1cf502de6722] ...
	I0816 10:35:45.470813    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf502de6722"
	I0816 10:35:45.485078    4989 logs.go:123] Gathering logs for kube-scheduler [359ce0ff7bb4] ...
	I0816 10:35:45.485090    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 359ce0ff7bb4"
	I0816 10:35:45.500171    4989 logs.go:123] Gathering logs for kube-proxy [b742a17388eb] ...
	I0816 10:35:45.500181    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b742a17388eb"
	I0816 10:35:45.512802    4989 logs.go:123] Gathering logs for storage-provisioner [bd7196361880] ...
	I0816 10:35:45.512815    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd7196361880"
	I0816 10:35:45.529676    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:35:45.529690    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:35:45.534156    4989 logs.go:123] Gathering logs for kube-apiserver [a7a83a83ddc9] ...
	I0816 10:35:45.534163    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7a83a83ddc9"
	I0816 10:35:45.566977    4989 logs.go:123] Gathering logs for coredns [2b9d57ed42bf] ...
	I0816 10:35:45.566990    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b9d57ed42bf"
	I0816 10:35:45.578263    4989 logs.go:123] Gathering logs for kube-controller-manager [74067d4f196b] ...
	I0816 10:35:45.578276    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74067d4f196b"
	I0816 10:35:45.592603    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:35:45.592617    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:35:45.604163    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:35:45.604176    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:35:45.644445    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:35:45.644455    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:35:45.680386    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:35:45.680396    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:35:45.702928    4989 logs.go:123] Gathering logs for kube-apiserver [3fd79eaf68a8] ...
	I0816 10:35:45.702938    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fd79eaf68a8"
	I0816 10:35:45.717277    4989 logs.go:123] Gathering logs for kube-controller-manager [80b6b31b902b] ...
	I0816 10:35:45.717288    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80b6b31b902b"
	I0816 10:35:45.737758    4989 logs.go:123] Gathering logs for storage-provisioner [6b701c4dbe92] ...
	I0816 10:35:45.737769    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b701c4dbe92"
	I0816 10:35:45.748894    4989 logs.go:123] Gathering logs for etcd [41bcbba53d2a] ...
	I0816 10:35:45.748907    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bcbba53d2a"
	I0816 10:35:45.762722    4989 logs.go:123] Gathering logs for kube-scheduler [af6f1e1bca6a] ...
	I0816 10:35:45.762735    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6f1e1bca6a"
	I0816 10:35:48.872365    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:48.872410    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:48.277384    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:53.279623    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:53.279719    4989 kubeadm.go:597] duration metric: took 4m4.410973666s to restartPrimaryControlPlane
	W0816 10:35:53.279785    4989 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 10:35:53.279811    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0816 10:35:54.266982    4989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:35:54.273273    4989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:35:54.276307    4989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:35:54.279338    4989 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:35:54.279345    4989 kubeadm.go:157] found existing configuration files:
	
	I0816 10:35:54.279369    4989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/admin.conf
	I0816 10:35:54.282281    4989 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:35:54.282311    4989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:35:54.284821    4989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/kubelet.conf
	I0816 10:35:54.287603    4989 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:35:54.287630    4989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:35:54.290497    4989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/controller-manager.conf
	I0816 10:35:54.292742    4989 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:35:54.292764    4989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:35:54.295262    4989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/scheduler.conf
	I0816 10:35:54.297749    4989 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:35:54.297772    4989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
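The stale-config check above is a grep-then-remove sweep: for each kubeconfig under /etc/kubernetes, minikube greps for the expected control-plane endpoint and deletes the file when the grep fails (exit status 2 here, since the preceding kubeadm reset already wiped the files), so that the following kubeadm init regenerates all four. The per-file logic is roughly:

    # keep the file only if it already points at the expected endpoint
    sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/admin.conf \
      || sudo rm -f /etc/kubernetes/admin.conf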
	I0816 10:35:54.300077    4989 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 10:35:54.316738    4989 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0816 10:35:54.316771    4989 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 10:35:54.368069    4989 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 10:35:54.368264    4989 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 10:35:54.368365    4989 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 10:35:54.421762    4989 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 10:35:54.429910    4989 out.go:235]   - Generating certificates and keys ...
	I0816 10:35:54.429943    4989 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 10:35:54.429972    4989 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 10:35:54.430028    4989 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 10:35:54.430135    4989 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 10:35:54.430173    4989 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 10:35:54.430273    4989 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 10:35:54.430398    4989 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 10:35:54.430482    4989 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 10:35:54.430529    4989 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 10:35:54.430579    4989 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 10:35:54.430599    4989 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 10:35:54.430649    4989 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 10:35:54.449244    4989 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 10:35:54.532985    4989 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 10:35:54.600376    4989 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 10:35:54.750472    4989 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 10:35:54.777983    4989 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 10:35:54.778376    4989 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 10:35:54.778418    4989 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 10:35:54.865559    4989 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 10:35:53.873511    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:53.873533    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:54.869748    4989 out.go:235]   - Booting up control plane ...
	I0816 10:35:54.869828    4989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 10:35:54.869904    4989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 10:35:54.869945    4989 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 10:35:54.876813    4989 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 10:35:54.877778    4989 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 10:35:59.380100    4989 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502247 seconds
	I0816 10:35:59.380161    4989 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 10:35:59.383976    4989 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 10:35:59.914866    4989 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 10:35:59.915383    4989 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-260000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 10:36:00.419896    4989 kubeadm.go:310] [bootstrap-token] Using token: ikrzsf.2vzddhz1mwsv220r
	I0816 10:36:00.426134    4989 out.go:235]   - Configuring RBAC rules ...
	I0816 10:36:00.426202    4989 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 10:36:00.426264    4989 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 10:36:00.435628    4989 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 10:36:00.436773    4989 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 10:36:00.437556    4989 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 10:36:00.438379    4989 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 10:36:00.441395    4989 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 10:36:00.618578    4989 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 10:36:00.832982    4989 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 10:36:00.833367    4989 kubeadm.go:310] 
	I0816 10:36:00.833399    4989 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 10:36:00.833405    4989 kubeadm.go:310] 
	I0816 10:36:00.833449    4989 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 10:36:00.833471    4989 kubeadm.go:310] 
	I0816 10:36:00.833483    4989 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 10:36:00.833562    4989 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 10:36:00.833592    4989 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 10:36:00.833595    4989 kubeadm.go:310] 
	I0816 10:36:00.833655    4989 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 10:36:00.833659    4989 kubeadm.go:310] 
	I0816 10:36:00.833679    4989 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 10:36:00.833683    4989 kubeadm.go:310] 
	I0816 10:36:00.833742    4989 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 10:36:00.833774    4989 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 10:36:00.833860    4989 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 10:36:00.833865    4989 kubeadm.go:310] 
	I0816 10:36:00.833918    4989 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 10:36:00.833954    4989 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 10:36:00.833956    4989 kubeadm.go:310] 
	I0816 10:36:00.834001    4989 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ikrzsf.2vzddhz1mwsv220r \
	I0816 10:36:00.834068    4989 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3dbef51adc186d93171c6716e4c9d3e67358220996635d2d9ed7318abf8b1c24 \
	I0816 10:36:00.834079    4989 kubeadm.go:310] 	--control-plane 
	I0816 10:36:00.834081    4989 kubeadm.go:310] 
	I0816 10:36:00.834122    4989 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 10:36:00.834125    4989 kubeadm.go:310] 
	I0816 10:36:00.834163    4989 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ikrzsf.2vzddhz1mwsv220r \
	I0816 10:36:00.834221    4989 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3dbef51adc186d93171c6716e4c9d3e67358220996635d2d9ed7318abf8b1c24 
	I0816 10:36:00.834282    4989 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
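The join commands printed above embed a discovery hash that is the SHA-256 of the cluster CA's public key. If needed, it can be recomputed on the node with the standard openssl pipeline from the kubeadm documentation; the certificate path below follows the certificateDir reported earlier in this init ("/var/lib/minikube/certs"), which is an assumption about where ca.crt lands on this image.

    # recompute --discovery-token-ca-cert-hash from the CA certificate
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'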
	I0816 10:36:00.834289    4989 cni.go:84] Creating CNI manager for ""
	I0816 10:36:00.834298    4989 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:36:00.838601    4989 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 10:36:00.846654    4989 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 10:36:00.849562    4989 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
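The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. As a sketch only (the field values and subnet below are illustrative assumptions, not taken from this run), a minimal bridge conflist of the kind the message describes looks like:

    # hypothetical minimal bridge CNI config; values are illustrative
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF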
	I0816 10:36:00.854470    4989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 10:36:00.854536    4989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-260000 minikube.k8s.io/updated_at=2024_08_16T10_36_00_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=running-upgrade-260000 minikube.k8s.io/primary=true
	I0816 10:36:00.854555    4989 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:36:00.858786    4989 ops.go:34] apiserver oom_adj: -16
	I0816 10:36:00.909334    4989 kubeadm.go:1113] duration metric: took 54.833625ms to wait for elevateKubeSystemPrivileges
	I0816 10:36:00.909453    4989 kubeadm.go:394] duration metric: took 4m12.060091791s to StartCluster
	I0816 10:36:00.909465    4989 settings.go:142] acquiring lock: {Name:mkd2048b6677d6c95a407663b8dc541f5fa54e50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:36:00.909548    4989 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:36:00.909928    4989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/kubeconfig: {Name:mk2e4f2b039616ddb85ed20d74e703a928518229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:36:00.910123    4989 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:36:00.910133    4989 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:36:00.910221    4989 config.go:182] Loaded profile config "running-upgrade-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:36:00.910174    4989 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-260000"
	I0816 10:36:00.910221    4989 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-260000"
	I0816 10:36:00.910247    4989 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-260000"
	I0816 10:36:00.910250    4989 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-260000"
	W0816 10:36:00.910253    4989 addons.go:243] addon storage-provisioner should already be in state true
	I0816 10:36:00.910280    4989 host.go:66] Checking if "running-upgrade-260000" exists ...
	I0816 10:36:00.911170    4989 kapi.go:59] client config for running-upgrade-260000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/running-upgrade-260000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106681610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:36:00.911291    4989 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-260000"
	W0816 10:36:00.911295    4989 addons.go:243] addon default-storageclass should already be in state true
	I0816 10:36:00.911303    4989 host.go:66] Checking if "running-upgrade-260000" exists ...
	I0816 10:36:00.914572    4989 out.go:177] * Verifying Kubernetes components...
	I0816 10:36:00.914915    4989 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 10:36:00.917991    4989 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 10:36:00.917997    4989 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/running-upgrade-260000/id_rsa Username:docker}
	I0816 10:36:00.920549    4989 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:35:58.875144    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:58.875279    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:00.924599    4989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:36:00.928629    4989 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:36:00.928635    4989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 10:36:00.928641    4989 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/running-upgrade-260000/id_rsa Username:docker}
	I0816 10:36:01.012151    4989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:36:01.017232    4989 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:36:01.017271    4989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:36:01.022016    4989 api_server.go:72] duration metric: took 111.884166ms to wait for apiserver process to appear ...
	I0816 10:36:01.022024    4989 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:36:01.022032    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
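Note the asymmetry in the two checks just above: the process probe (pgrep over SSH) came back in about 112ms, while every healthz request to https://10.0.2.15:8443 times out. That is consistent with the host being unable to reach the guest's 10.0.2.15 address under QEMU user-mode networking, rather than with a dead apiserver process. The two probes, run by hand, are roughly:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'          # process check: succeeds
    curl -k --max-time 5 https://10.0.2.15:8443/healthz   # health check: hangs, then times out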
	I0816 10:36:01.035158    4989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:36:01.093150    4989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 10:36:01.370097    4989 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:36:01.370109    4989 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:36:03.877438    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:03.877485    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:06.022450    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:06.022532    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:08.879728    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:08.879797    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:11.024054    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:11.024114    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:13.882049    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:13.882071    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:16.024423    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:16.024476    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:18.884141    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:18.884296    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:36:18.895316    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:36:18.895398    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:36:18.905890    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:36:18.905969    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:36:18.916129    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:36:18.916193    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:36:18.930094    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:36:18.930169    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:36:18.940515    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:36:18.940582    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:36:18.951345    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:36:18.951412    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:36:18.961940    5136 logs.go:276] 0 containers: []
	W0816 10:36:18.961952    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:36:18.962015    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:36:18.972086    5136 logs.go:276] 0 containers: []
	W0816 10:36:18.972098    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:36:18.972105    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:36:18.972110    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:36:18.985818    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:36:18.985832    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:36:18.996814    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:36:18.996826    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:36:19.008781    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:36:19.008793    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:36:19.022814    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:36:19.022827    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:36:19.035517    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:36:19.035533    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:36:19.074659    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:36:19.074672    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:36:19.152463    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:36:19.152478    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:36:19.195952    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:36:19.195969    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:36:19.213450    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:36:19.213465    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:36:19.225323    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:36:19.225333    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:36:19.249379    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:36:19.249389    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:36:19.253370    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:36:19.253379    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:36:19.267480    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:36:19.267496    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:36:19.282532    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:36:19.282543    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:36:21.800419    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:21.024931    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:21.024989    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:26.802742    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:26.802911    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:36:26.817089    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:36:26.817162    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:36:26.828850    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:36:26.828926    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:36:26.840934    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:36:26.841025    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:36:26.852261    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:36:26.852335    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:36:26.863078    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:36:26.863145    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:36:26.876691    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:36:26.876748    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:36:26.887556    5136 logs.go:276] 0 containers: []
	W0816 10:36:26.887570    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:36:26.887628    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:36:26.897732    5136 logs.go:276] 0 containers: []
	W0816 10:36:26.897744    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:36:26.897751    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:36:26.897756    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:36:26.909758    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:36:26.909769    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:36:26.922380    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:36:26.922391    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:36:26.935572    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:36:26.935586    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:36:26.977768    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:36:26.977782    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:36:26.982111    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:36:26.982119    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:36:26.997092    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:36:26.997102    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:36:27.012039    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:36:27.012049    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:36:27.023795    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:36:27.023808    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:36:27.060743    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:36:27.060754    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:36:27.078509    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:36:27.078521    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:36:27.104713    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:36:27.104729    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:36:27.121425    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:36:27.121437    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:36:27.159609    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:36:27.159620    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:36:27.171553    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:36:27.171564    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:36:26.025622    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:26.025645    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:31.026320    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:31.026370    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0816 10:36:31.371806    4989 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0816 10:36:31.374994    4989 out.go:177] * Enabled addons: storage-provisioner
	I0816 10:36:29.687405    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:31.382939    4989 addons.go:510] duration metric: took 30.473455667s for enable addons: enabled=[storage-provisioner]
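The default-storageclass failure above appears to be the same reachability problem surfacing through the Kubernetes client: marking a class as default requires listing StorageClasses over a live apiserver connection from the host, whereas storage-provisioner only needed its manifest pushed over SSH and applied in-guest, so it is reported as enabled. An in-guest equivalent of the listing that timed out from the host would be:

    # list StorageClasses from inside the VM, where the apiserver is reachable
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get storageclasses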
	I0816 10:36:34.689683    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:34.689914    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:36:34.713249    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:36:34.713345    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:36:34.729575    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:36:34.729661    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:36:34.742462    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:36:34.742536    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:36:34.753764    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:36:34.753835    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:36:34.763945    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:36:34.764013    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:36:34.774255    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:36:34.774330    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:36:34.784617    5136 logs.go:276] 0 containers: []
	W0816 10:36:34.784629    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:36:34.784686    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:36:34.795039    5136 logs.go:276] 0 containers: []
	W0816 10:36:34.795051    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:36:34.795058    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:36:34.795064    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:36:34.806342    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:36:34.806355    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:36:34.819454    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:36:34.819471    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:36:34.831486    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:36:34.831499    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:36:34.845838    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:36:34.845851    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:36:34.884682    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:36:34.884696    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:36:34.900468    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:36:34.900483    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:36:34.913931    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:36:34.913945    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:36:34.939441    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:36:34.939450    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:36:34.977989    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:36:34.978002    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:36:34.982225    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:36:34.982231    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:36:35.001145    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:36:35.001156    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:36:35.037853    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:36:35.037867    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:36:35.052611    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:36:35.052627    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:36:35.064467    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:36:35.064478    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:36:37.585648    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:36.027262    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:36.027298    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:42.588085    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:42.588484    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:36:42.623458    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:36:42.623580    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:36:42.642716    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:36:42.642812    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:36:42.665553    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:36:42.665628    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:36:42.677761    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:36:42.677829    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:36:42.688097    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:36:42.688165    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:36:42.699195    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:36:42.699263    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:36:42.710411    5136 logs.go:276] 0 containers: []
	W0816 10:36:42.710422    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:36:42.710478    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:36:42.730192    5136 logs.go:276] 0 containers: []
	W0816 10:36:42.730203    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:36:42.730211    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:36:42.730216    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:36:42.748082    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:36:42.748095    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:36:42.764596    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:36:42.764608    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:36:42.779298    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:36:42.779310    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:36:42.816325    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:36:42.816339    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:36:42.833818    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:36:42.833836    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:36:42.845474    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:36:42.845486    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:36:42.859582    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:36:42.859592    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:36:42.898197    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:36:42.898208    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:36:42.902268    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:36:42.902275    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:36:42.915982    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:36:42.915994    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:36:42.930782    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:36:42.930796    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:36:41.028468    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:41.028513    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:42.966402    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:36:42.966413    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:36:42.982906    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:36:42.982915    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:36:42.994271    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:36:42.994283    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:36:45.521376    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:46.028793    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:46.028880    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:50.523726    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:50.523992    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:36:50.555053    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:36:50.555182    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:36:50.574030    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:36:50.574125    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:36:50.588244    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:36:50.588320    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:36:50.600630    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:36:50.600706    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:36:50.611379    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:36:50.611444    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:36:50.622219    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:36:50.622286    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:36:50.631949    5136 logs.go:276] 0 containers: []
	W0816 10:36:50.631962    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:36:50.632019    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:36:50.642781    5136 logs.go:276] 0 containers: []
	W0816 10:36:50.642793    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:36:50.642800    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:36:50.642805    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:36:50.664324    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:36:50.664335    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:36:50.689294    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:36:50.689305    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:36:50.728072    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:36:50.728083    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:36:50.742979    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:36:50.742989    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:36:50.760194    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:36:50.760207    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:36:50.764811    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:36:50.764819    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:36:50.779171    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:36:50.779186    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:36:50.815109    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:36:50.815123    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:36:50.831052    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:36:50.831069    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:36:50.846144    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:36:50.846158    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:36:50.858136    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:36:50.858152    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:36:50.869893    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:36:50.869902    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:36:50.883722    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:36:50.883733    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:36:50.920547    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:36:50.920558    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:36:51.029428    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:51.029456    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:53.436352    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:56.029760    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:56.029824    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:58.437398    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:58.437568    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:36:58.451289    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:36:58.451378    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:36:58.462213    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:36:58.462282    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:36:58.472468    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:36:58.472542    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:36:58.483318    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:36:58.483388    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:36:58.494292    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:36:58.494356    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:36:58.504694    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:36:58.504762    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:36:58.517868    5136 logs.go:276] 0 containers: []
	W0816 10:36:58.517881    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:36:58.517943    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:36:58.528593    5136 logs.go:276] 0 containers: []
	W0816 10:36:58.528611    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:36:58.528619    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:36:58.528624    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:36:58.554667    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:36:58.554678    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:36:58.559155    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:36:58.559164    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:36:58.572908    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:36:58.572921    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:36:58.588645    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:36:58.588656    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:36:58.606584    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:36:58.606594    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:36:58.642875    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:36:58.642883    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:36:58.679524    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:36:58.679535    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:36:58.698310    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:36:58.698321    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:36:58.710755    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:36:58.710764    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:36:58.723506    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:36:58.723518    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:36:58.760988    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:36:58.761000    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:36:58.775282    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:36:58.775293    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:36:58.786811    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:36:58.786821    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:36:58.798370    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:36:58.798384    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:01.320347    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:01.031920    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:01.032045    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:01.054646    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:01.054718    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:01.065066    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:01.065131    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:01.075503    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:01.075568    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:01.085529    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:01.085589    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:01.096351    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:01.096422    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:01.106772    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:01.106837    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:01.117132    4989 logs.go:276] 0 containers: []
	W0816 10:37:01.117143    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:01.117193    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:01.127273    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:01.127286    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:01.127292    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:01.138565    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:01.138575    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:01.162447    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:01.162454    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:01.173588    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:01.173600    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:01.211094    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:01.211107    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:01.222821    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:01.222831    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:01.237719    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:01.237729    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:01.251235    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:01.251244    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:01.262352    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:01.262364    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:01.273821    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:01.273830    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:01.288827    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:01.288838    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:01.306715    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:01.306725    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:01.342610    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:01.342617    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
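
The api_server.go lines throughout this section show both processes (pids 4989 and 5136) repeatedly probing the apiserver's /healthz endpoint and timing out with "Client.Timeout exceeded while awaiting headers". A minimal Go sketch of that kind of probe loop follows; the 5-second timeout, retry count, and sleep interval are illustrative assumptions, not minikube's actual configuration (only the URL and the timeout error text come from the log above).

// healthzprobe.go: sketch of a /healthz poll loop with a client-side timeout.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// When the apiserver never answers, this surfaces as
		// "Client.Timeout exceeded while awaiting headers".
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is self-signed inside the VM.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz"
	for attempt := 0; attempt < 3; attempt++ {
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Printf("healthz: %s\n", resp.Status)
		return
	}
}
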
	I0816 10:37:06.322562    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:06.322939    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:06.360795    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:06.360939    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:06.381318    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:06.381408    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:06.396765    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:06.396845    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:06.409418    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:06.409489    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:06.420535    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:06.420609    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:06.431604    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:06.431676    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:06.446251    5136 logs.go:276] 0 containers: []
	W0816 10:37:06.446262    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:06.446324    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:06.465888    5136 logs.go:276] 0 containers: []
	W0816 10:37:06.465901    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:06.465908    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:06.465916    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:06.503233    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:06.503243    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:06.516047    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:06.516058    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:06.550226    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:06.550238    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:37:06.566288    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:06.566300    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:06.584505    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:06.584517    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:06.589100    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:06.589108    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:06.627677    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:06.627689    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:06.643228    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:06.643243    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:06.660526    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:06.660537    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:37:06.674400    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:06.674413    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:06.688321    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:06.688330    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:06.714919    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:06.714932    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:06.729541    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:06.729554    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:37:06.744910    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:06.744920    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:03.849379    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:09.259710    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:08.851639    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:08.851770    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:08.863766    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:08.863860    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:08.874412    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:08.874485    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:08.884688    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:08.884771    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:08.895211    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:08.895274    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:08.905678    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:08.905750    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:08.916350    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:08.916428    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:08.926495    4989 logs.go:276] 0 containers: []
	W0816 10:37:08.926507    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:08.926573    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:08.937016    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:08.937030    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:08.937036    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:08.948494    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:08.948505    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:08.960149    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:08.960160    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:08.977274    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:08.977284    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:08.991899    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:08.991910    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:09.026629    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:09.026639    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:09.031217    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:09.031224    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:09.071123    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:09.071134    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:09.087635    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:09.087646    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:09.100145    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:09.100157    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:09.115986    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:09.115996    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:09.128469    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:09.128482    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:09.144029    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:09.144040    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:11.669805    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:14.261848    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:14.262035    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:14.280179    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:14.280274    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:14.293640    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:14.293723    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:14.305903    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:14.305967    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:14.316710    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:14.316771    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:14.327301    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:14.327369    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:14.338000    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:14.338061    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:14.348804    5136 logs.go:276] 0 containers: []
	W0816 10:37:14.348815    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:14.348867    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:14.359141    5136 logs.go:276] 0 containers: []
	W0816 10:37:14.359151    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:14.359159    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:14.359164    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:14.393610    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:14.393622    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:14.431246    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:14.431257    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:37:14.446932    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:14.446945    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:14.465726    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:14.465739    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:14.470254    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:14.470264    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:14.484874    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:14.484884    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:14.510150    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:14.510157    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:14.521524    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:14.521536    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:14.534589    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:14.534601    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:14.548889    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:14.548902    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:37:14.560512    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:14.560523    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:14.597728    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:14.597736    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:14.612086    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:14.612100    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:14.623873    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:14.623888    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:37:17.140182    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:16.672494    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:16.672939    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:16.725383    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:16.725507    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:16.748210    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:16.748293    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:16.766127    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:16.766197    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:16.778992    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:16.779063    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:16.789888    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:16.789967    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:16.800981    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:16.801050    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:16.811508    4989 logs.go:276] 0 containers: []
	W0816 10:37:16.811518    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:16.811572    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:16.822453    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:16.822469    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:16.822477    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:16.886989    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:16.887008    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:16.902384    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:16.902396    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:16.920487    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:16.920500    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:16.933384    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:16.933396    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:16.956853    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:16.956862    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:16.968092    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:16.968108    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:17.000724    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:17.000732    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:17.004801    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:17.004809    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:17.016475    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:17.016490    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:17.028220    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:17.028231    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:17.043081    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:17.043093    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:17.054740    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:17.054750    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:22.142338    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:22.142678    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:22.174851    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:22.174979    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:22.193386    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:22.193481    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:22.207645    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:22.207720    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:22.219784    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:22.219858    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:22.230958    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:22.231028    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:22.241973    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:22.242045    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:22.252217    5136 logs.go:276] 0 containers: []
	W0816 10:37:22.252229    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:22.252284    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:22.262869    5136 logs.go:276] 0 containers: []
	W0816 10:37:22.262881    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:22.262889    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:22.262894    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:37:22.282011    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:22.282021    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:22.300184    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:22.300196    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:22.324312    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:22.324321    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:37:22.338817    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:22.338832    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:22.360511    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:22.360524    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:22.394264    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:22.394275    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:22.409977    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:22.409987    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:22.421120    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:22.421133    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:22.435007    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:22.435019    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:22.474624    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:22.474634    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:22.513512    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:22.513522    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:22.525277    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:22.525288    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:37:22.537795    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:22.537809    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:22.549701    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:22.549714    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
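
Each gather cycle above begins by enumerating containers per Kubernetes component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then logging "N containers: [...]". A self-contained Go sketch of that discovery step, run locally rather than over minikube's ssh_runner; the component list is illustrative.

// discover.go: sketch of per-component container-ID discovery via docker ps.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited)
// whose name matches the kubeadm naming convention k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		// Mirrors the "N containers: [...]" lines in the log.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
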
	I0816 10:37:19.574757    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:25.055797    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:24.577142    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:24.577488    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:24.619366    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:24.619502    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:24.640990    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:24.641089    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:24.655958    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:24.656034    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:24.668310    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:24.668373    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:24.679386    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:24.679463    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:24.690605    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:24.690666    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:24.702192    4989 logs.go:276] 0 containers: []
	W0816 10:37:24.702207    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:24.702259    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:24.712613    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:24.712629    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:24.712635    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:24.754233    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:24.754245    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:24.769393    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:24.769403    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:24.789085    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:24.789099    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:24.800888    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:24.800899    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:24.818233    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:24.818245    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:24.822790    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:24.822800    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:24.836939    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:24.836952    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:24.848374    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:24.848387    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:24.867821    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:24.867833    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:24.879459    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:24.879471    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:24.905182    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:24.905194    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:24.917217    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:24.917229    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:27.454330    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:30.057977    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:30.058166    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:30.077368    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:30.077461    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:30.091326    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:30.091402    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:30.103227    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:30.103301    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:30.118203    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:30.118302    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:30.128773    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:30.128844    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:30.139888    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:30.139963    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:30.150688    5136 logs.go:276] 0 containers: []
	W0816 10:37:30.150698    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:30.150761    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:30.165630    5136 logs.go:276] 0 containers: []
	W0816 10:37:30.165648    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:30.165657    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:30.165663    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:30.180015    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:30.180025    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:37:30.195856    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:30.195866    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:30.219923    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:30.219933    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:30.232016    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:30.232032    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:30.250827    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:30.250837    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:30.287889    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:30.287900    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:30.291730    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:30.291739    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:30.337713    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:30.337725    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:30.351671    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:30.351684    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:30.363325    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:30.363336    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:30.377715    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:30.377724    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:30.415690    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:30.415704    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:37:30.429562    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:30.429572    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:30.450992    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:30.451006    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:37:32.456213    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:32.456451    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:32.476734    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:32.476832    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:32.491298    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:32.491379    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:32.503264    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:32.503326    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:32.513924    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:32.513985    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:32.524313    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:32.524383    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:32.534563    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:32.534633    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:32.545000    4989 logs.go:276] 0 containers: []
	W0816 10:37:32.545012    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:32.545068    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:32.555589    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:32.555605    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:32.555611    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:32.624771    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:32.624785    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:32.645331    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:32.645343    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:32.656768    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:32.656778    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:32.668191    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:32.668203    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:32.679944    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:32.679956    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:32.699680    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:32.699693    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:32.733189    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:32.733198    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:32.738582    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:32.738592    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:32.752516    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:32.752529    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:32.767370    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:32.767383    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:32.782162    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:32.782172    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:32.805539    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:32.805550    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
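
The "container status" and per-container gather steps above shell out to crictl with a docker fallback, and to docker logs --tail 400 for each discovered ID. A rough Go sketch of those two commands as invoked in the log; the command strings are copied from the transcript, while the hard-coded etcd container ID is taken from this run and is illustrative only.

// gather.go: sketch of the container-status fallback and per-container log tail.
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl but falls back to docker ps, exactly as
// the "container status" lines above do.
func containerStatus() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	return string(out), err
}

// tailLogs fetches the last 400 log lines of one container.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("docker logs --tail 400 %s", id)).CombinedOutput()
	return string(out), err
}

func main() {
	if s, err := containerStatus(); err == nil {
		fmt.Println(s)
	}
	if s, err := tailLogs("cc00c134823c"); err == nil { // etcd ID from this run
		fmt.Println(s)
	}
}
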
	I0816 10:37:32.964524    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:35.319149    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:37.966682    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:37.966916    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:37.994590    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:37.994713    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:38.011622    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:38.011702    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:38.026818    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:38.026907    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:38.038220    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:38.038292    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:38.049175    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:38.049255    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:38.060178    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:38.060251    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:38.070248    5136 logs.go:276] 0 containers: []
	W0816 10:37:38.070259    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:38.070317    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:38.080283    5136 logs.go:276] 0 containers: []
	W0816 10:37:38.080295    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:38.080303    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:38.080338    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:38.095128    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:38.095138    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:37:38.107412    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:38.107423    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:38.126190    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:38.126200    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:38.140058    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:38.140071    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:38.163716    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:38.163724    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:38.202030    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:38.202046    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:38.220276    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:38.220285    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:37:38.234227    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:38.234242    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:38.246063    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:38.246073    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:38.286162    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:38.286182    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:38.290849    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:38.290856    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:38.302057    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:38.302072    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:38.340305    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:38.340313    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:38.351774    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:38.351789    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
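	[Editor's note: each gathering cycle above follows a fixed pattern — for every control-plane component, minikube lists matching containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` (the `logs.go:276` lines report the count), then tails each hit. A minimal Go sketch of the discovery step, under the assumption of a local Docker daemon; `listContainers` is a hypothetical helper, and the real code in logs.go/ssh_runner.go runs the same command over SSH inside the VM:]

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of containers named k8s_<component>,
// mirroring the `docker ps -a --filter=name=... --format={{.ID}}`
// calls in the log above. Sketch only: minikube executes this via
// its SSH runner rather than directly.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Matches the "N containers: [...]" lines from logs.go:276.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```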
	I0816 10:37:40.867449    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:40.321530    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:40.321856    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:40.362804    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:40.362944    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:40.385369    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:40.385484    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:40.400504    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:40.400588    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:40.413020    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:40.413086    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:40.424330    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:40.424404    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:40.435258    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:40.435331    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:40.447114    4989 logs.go:276] 0 containers: []
	W0816 10:37:40.447128    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:40.447191    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:40.457616    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:40.457631    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:40.457635    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:40.475019    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:40.475028    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:40.486980    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:40.486990    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:40.521478    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:40.521487    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:40.526272    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:40.526281    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:40.537973    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:40.537983    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:40.557876    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:40.557890    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:40.572520    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:40.572529    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:40.597121    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:40.597132    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:40.608472    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:40.608482    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:40.644244    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:40.644255    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:40.658708    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:40.658720    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:40.672278    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:40.672290    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:45.867795    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
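	[Editor's note: each `Checking apiserver healthz` / `stopped: ... context deadline exceeded` pair is one probe of the apiserver's /healthz endpoint that times out; the timestamps above show roughly five seconds between check and failure. A rough Go sketch of such a probe — the timeout value and TLS handling are assumptions, and minikube's actual version lives in api_server.go:]

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the apiserver's /healthz
// endpoint with a short client timeout, as in the repeated checks
// above. On timeout, net/http returns exactly the error seen in
// this log: "context deadline exceeded (Client.Timeout exceeded
// while awaiting headers)".
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed; log shows ~5s between check and "stopped"
		Transport: &http.Transport{
			// The VM apiserver's cert is self-signed, so the sketch
			// skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
	return nil
}

func main() {
	if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}
```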
	I0816 10:37:45.867976    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:45.885513    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:45.885618    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:45.899092    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:45.899162    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:45.910430    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:45.910499    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:45.920999    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:45.921074    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:45.931575    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:45.931638    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:45.942195    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:45.942256    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:45.952922    5136 logs.go:276] 0 containers: []
	W0816 10:37:45.952934    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:45.952991    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:45.963472    5136 logs.go:276] 0 containers: []
	W0816 10:37:45.963484    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:45.963495    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:45.963501    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:45.984766    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:45.984777    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:46.026854    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:46.026866    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:46.039507    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:46.039519    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:37:46.057351    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:46.057360    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:46.071668    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:46.071678    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:37:46.083479    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:46.083490    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:46.107534    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:46.107543    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:46.119212    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:46.119224    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:46.123912    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:46.123919    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:46.159451    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:46.159466    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:37:46.174805    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:46.174815    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:46.188876    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:46.188887    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:46.200127    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:46.200137    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:46.237285    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:46.237293    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:43.191155    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:48.754599    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:48.192307    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:48.192536    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:48.214109    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:48.214199    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:48.228919    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:48.228990    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:48.240975    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:48.241041    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:48.256511    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:48.256583    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:48.266553    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:48.266620    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:48.280831    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:48.280897    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:48.290724    4989 logs.go:276] 0 containers: []
	W0816 10:37:48.290734    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:48.290794    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:48.301156    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:48.301172    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:48.301177    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:48.334676    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:48.334685    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:48.371039    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:48.371051    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:48.385224    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:48.385237    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:48.397014    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:48.397025    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:48.409429    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:48.409440    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:48.420623    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:48.420633    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:48.444595    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:48.444605    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:48.449229    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:48.449238    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:48.466944    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:48.466957    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:48.478650    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:48.478662    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:48.503555    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:48.503565    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:48.524214    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:48.524224    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:51.041883    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:53.753204    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:53.753749    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:53.775996    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:53.776082    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:53.793563    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:53.793640    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:53.804686    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:53.804755    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:53.815464    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:53.815537    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:53.826314    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:53.826376    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:53.836826    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:53.836896    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:53.847245    5136 logs.go:276] 0 containers: []
	W0816 10:37:53.847255    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:53.847307    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:53.857659    5136 logs.go:276] 0 containers: []
	W0816 10:37:53.857671    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:53.857679    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:53.857688    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:53.897866    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:53.897878    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:53.911784    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:53.911795    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:37:53.927445    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:53.927455    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:53.939531    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:53.939544    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:53.943998    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:53.944006    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:53.981298    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:53.981313    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:37:53.995349    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:53.995362    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:37:54.007157    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:54.007169    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:54.031916    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:54.031930    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:54.071263    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:54.071282    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:54.085817    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:54.085828    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:54.118890    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:54.118900    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:54.133792    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:54.133803    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:54.146044    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:54.146059    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:56.660712    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:56.041903    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:56.042149    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:56.063797    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:37:56.063895    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:56.081651    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:37:56.081722    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:56.093922    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:37:56.093998    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:56.104911    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:37:56.104977    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:56.115582    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:37:56.115649    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:56.125832    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:37:56.125896    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:56.135955    4989 logs.go:276] 0 containers: []
	W0816 10:37:56.135965    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:56.136021    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:56.146149    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:37:56.146163    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:37:56.146167    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:37:56.157601    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:37:56.157612    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:37:56.169612    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:37:56.169625    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:37:56.187233    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:37:56.187241    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:37:56.202130    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:56.202142    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:56.236975    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:37:56.236984    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:37:56.250985    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:37:56.250996    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:37:56.264758    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:37:56.264769    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:37:56.280642    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:37:56.280655    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:37:56.292253    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:56.292265    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:56.315723    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:37:56.315731    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:56.327087    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:56.327100    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:56.331599    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:56.331609    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:01.661445    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:01.661669    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:01.684541    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:01.684632    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:01.698698    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:01.698777    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:01.715199    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:01.715267    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:01.726127    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:01.726198    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:01.737330    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:01.737400    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:01.747938    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:01.748005    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:01.758052    5136 logs.go:276] 0 containers: []
	W0816 10:38:01.758065    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:01.758125    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:01.767992    5136 logs.go:276] 0 containers: []
	W0816 10:38:01.768004    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:01.768012    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:01.768017    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:01.772532    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:01.772542    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:01.809943    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:01.809955    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:01.825543    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:01.825553    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:01.843601    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:01.843611    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:01.867773    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:01.867786    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:01.906051    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:01.906059    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:01.917657    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:01.917670    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:01.932025    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:01.932035    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:01.946228    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:01.946242    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:01.957754    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:01.957768    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:01.969801    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:01.969813    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:01.981103    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:01.981114    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:02.018765    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:02.018779    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:02.033202    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:02.033213    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:58.869556    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:04.549500    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:03.870392    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:03.870553    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:03.884655    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:03.884736    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:03.897907    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:03.897977    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:03.910499    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:38:03.910569    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:03.920595    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:03.920658    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:03.931101    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:03.931174    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:03.941606    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:03.941668    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:03.951841    4989 logs.go:276] 0 containers: []
	W0816 10:38:03.951855    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:03.951913    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:03.962441    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:03.962456    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:03.962460    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:03.997164    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:03.997171    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:04.032867    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:04.032879    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:04.047427    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:04.047440    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:04.059896    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:04.059907    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:04.075139    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:04.075148    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:04.087074    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:04.087085    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:04.111896    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:04.111904    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:04.116437    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:04.116442    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:04.132131    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:04.132141    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:04.144242    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:04.144252    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:04.156459    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:04.156471    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:04.174452    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:04.174465    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
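	[Editor's note: after discovery, every gathered source above is one shell command run in the VM: `docker logs --tail 400 <id>` for containers, `journalctl -u <unit> -n 400` for kubelet and Docker, and `kubectl describe nodes` for node state. A hedged sketch of the tail step; the container ID in main is taken from this log, and the real runner wraps the same command in `/bin/bash -c` over SSH:]

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns the last n lines of a container's logs,
// mirroring the `docker logs --tail 400 <id>` commands above.
func tailContainerLogs(id string, n int) (string, error) {
	cmd := fmt.Sprintf("docker logs --tail %d %s", n, id)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// 0ad6370357cb is a coredns container ID seen in the log above.
	logs, err := tailContainerLogs("0ad6370357cb", 400)
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(logs)
}
```

	[The "container status" line uses a shell fallback worth noting: the command substitution `which crictl || echo crictl` resolves crictl's path if it is on PATH and otherwise leaves the bare name, and if that `crictl ps -a` invocation fails entirely, the trailing `|| sudo docker ps -a` falls back to Docker.]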
	I0816 10:38:06.685901    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:09.550819    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:09.551069    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:09.574995    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:09.575096    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:09.593326    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:09.593401    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:09.606507    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:09.606578    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:09.617466    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:09.617539    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:09.630756    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:09.630824    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:09.641905    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:09.641974    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:09.654258    5136 logs.go:276] 0 containers: []
	W0816 10:38:09.654271    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:09.654330    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:09.670248    5136 logs.go:276] 0 containers: []
	W0816 10:38:09.670259    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:09.670268    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:09.670274    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:09.681935    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:09.681947    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:09.719414    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:09.719425    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:09.734627    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:09.734643    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:09.748255    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:09.748265    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:09.759404    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:09.759415    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:09.780708    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:09.780722    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:09.794650    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:09.794659    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:09.818567    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:09.818575    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:09.857734    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:09.857745    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:09.894468    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:09.894479    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:09.906132    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:09.906142    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:09.921389    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:09.921401    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:09.925622    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:09.925628    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:09.943664    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:09.943675    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:12.457833    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:11.687330    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:11.687591    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:11.710841    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:11.710952    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:11.730464    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:11.730550    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:11.742776    4989 logs.go:276] 2 containers: [22be3ed5da22 95f216c8e7c0]
	I0816 10:38:11.742848    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:11.753760    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:11.753822    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:11.769157    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:11.769227    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:11.779778    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:11.779843    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:11.793013    4989 logs.go:276] 0 containers: []
	W0816 10:38:11.793024    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:11.793089    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:11.803221    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:11.803238    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:11.803243    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:11.814964    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:11.814977    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:11.832757    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:11.832770    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:11.844405    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:11.844415    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:11.869662    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:11.869672    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:11.883927    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:11.883940    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:11.895520    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:11.895530    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:11.913711    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:11.913721    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:11.928105    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:11.928118    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:11.939926    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:11.939940    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:11.951159    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:11.951173    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:11.986091    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:11.986100    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:11.990929    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:11.990934    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:17.459434    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:17.459643    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:17.477855    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:17.477952    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:17.492055    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:17.492135    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:17.503669    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:17.503739    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:17.514412    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:17.514478    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:17.525092    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:17.525156    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:17.540493    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:17.540558    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:17.550623    5136 logs.go:276] 0 containers: []
	W0816 10:38:17.550638    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:17.550697    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:17.561027    5136 logs.go:276] 0 containers: []
	W0816 10:38:17.561039    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:17.561046    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:17.561052    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:17.596297    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:17.596312    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:17.610693    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:17.610704    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:17.649284    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:17.649295    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:17.653793    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:17.653802    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:17.693003    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:17.693016    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:17.704314    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:17.704327    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:17.715702    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:17.715712    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:17.727426    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:17.727442    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:17.741511    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:17.741523    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:17.761517    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:17.761530    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:17.779090    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:17.779101    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:17.793186    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:17.793195    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:17.804387    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:17.804398    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:17.818017    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:17.818027    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:14.545428    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:20.343544    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:19.547054    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:19.547256    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:19.566705    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:19.566794    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:19.581419    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:19.581502    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:19.593524    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:38:19.593603    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:19.607002    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:19.607076    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:19.621570    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:19.621632    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:19.631799    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:19.631864    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:19.641745    4989 logs.go:276] 0 containers: []
	W0816 10:38:19.641756    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:19.641810    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:19.652639    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:19.652656    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:19.652661    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:19.689429    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:38:19.689442    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:38:19.706525    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:38:19.706536    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:38:19.719740    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:19.719750    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:19.735497    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:19.735510    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:19.746686    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:19.746701    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:19.781849    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:19.781858    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:19.795701    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:19.795713    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:19.807172    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:19.807182    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:19.818904    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:19.818913    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:19.842479    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:19.842487    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:19.854004    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:19.854015    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:19.858497    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:19.858506    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:19.872395    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:19.872404    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:19.884472    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:19.884482    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:22.404353    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:25.345590    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:25.345815    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:25.373847    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:25.373950    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:25.388421    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:25.388501    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:25.401036    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:25.401106    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:25.423185    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:25.423262    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:25.437516    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:25.437581    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:25.452210    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:25.452277    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:25.462772    5136 logs.go:276] 0 containers: []
	W0816 10:38:25.462784    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:25.462841    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:25.473354    5136 logs.go:276] 0 containers: []
	W0816 10:38:25.473370    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:25.473377    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:25.473382    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:25.512488    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:25.512498    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:25.526804    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:25.526817    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:25.538876    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:25.538888    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:25.551127    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:25.551138    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:25.590409    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:25.590433    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:25.595370    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:25.595383    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:25.610430    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:25.610441    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:25.621907    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:25.621920    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:25.659089    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:25.659102    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:25.672411    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:25.672423    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:25.687778    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:25.687788    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:25.705276    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:25.705287    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:25.719049    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:25.719062    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:25.742130    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:25.742139    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
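	The pattern above repeats for the rest of this test: each probe of https://10.0.2.15:8443/healthz fails with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)", and each failure triggers another log-gathering pass. As a rough illustration (not minikube's actual implementation), a plain Go probe with a short client timeout reproduces that exact error string; the 5-second timeout and the skipped TLS verification are assumptions for the sketch:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // Assumed timeout; when it fires before response headers arrive,
            // the error reads "Client.Timeout exceeded while awaiting headers".
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Probe only; never skip verification in production code.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }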
	I0816 10:38:27.405081    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:27.405425    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:27.435770    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:27.435897    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:27.454552    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:27.454643    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:27.468553    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:38:27.468625    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:27.480176    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:27.480248    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:27.490759    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:27.490831    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:27.501478    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:27.501544    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:27.512017    4989 logs.go:276] 0 containers: []
	W0816 10:38:27.512028    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:27.512087    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:27.522706    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:27.522721    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:27.522727    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:27.537036    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:27.537048    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:27.549143    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:27.549158    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:27.583778    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:27.583786    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:27.595854    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:27.595867    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:27.622101    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:27.622115    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:27.640380    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:27.640392    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:27.645566    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:27.645573    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:27.680474    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:38:27.680485    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:38:27.699026    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:27.699036    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:27.721271    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:27.721281    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:27.733120    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:27.733132    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:27.746925    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:38:27.746936    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:38:27.757982    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:27.757991    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:27.773386    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:27.773398    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
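	Each gathering pass begins by enumerating containers per control-plane component; the "N containers: [...]" lines from logs.go:276 are the parsed output of docker ps -a --filter=name=k8s_<component> --format={{.ID}}. A minimal local sketch of that discovery step (minikube runs the same command over SSH inside the guest):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers, running or exited, whose name matches
    // the kubeadm naming convention k8s_<component>_..., returning short IDs.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        } {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }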
	I0816 10:38:28.258308    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:30.286606    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:33.260429    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:33.260696    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:33.297526    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:33.297641    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:33.313704    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:33.313775    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:33.325561    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:33.325635    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:33.336121    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:33.336189    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:33.347021    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:33.347089    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:33.357351    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:33.357409    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:33.367405    5136 logs.go:276] 0 containers: []
	W0816 10:38:33.367418    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:33.367476    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:33.377494    5136 logs.go:276] 0 containers: []
	W0816 10:38:33.377511    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:33.377518    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:33.377527    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:33.382093    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:33.382100    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:33.423524    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:33.423534    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:33.437343    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:33.437354    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:33.454595    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:33.454608    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:33.472140    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:33.472152    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:33.494669    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:33.494677    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:33.512574    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:33.512585    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:33.550408    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:33.550419    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:33.561664    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:33.561675    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:33.573194    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:33.573205    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:33.591441    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:33.591450    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:33.630306    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:33.630317    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:33.644815    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:33.644825    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:33.659388    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:33.659399    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
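	With the IDs in hand, each component's logs are tailed 400 lines at a time through /bin/bash -c, as the Run: lines show. A sketch of that step; the container ID below is the kube-apiserver ID from this run, and any ID would do:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs mirrors the Run: lines above: the command string is
    // executed through /bin/bash -c, the way minikube's ssh_runner does remotely.
    func tailContainerLogs(id string, n int) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("docker logs --tail %d %s", n, id)).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := tailContainerLogs("6f87224f6deb", 400)
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(out)
    }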
	I0816 10:38:36.173662    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:35.288876    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:35.289197    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:35.324527    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:35.324629    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:35.347363    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:35.347444    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:35.361400    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:38:35.361475    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:35.373279    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:35.373346    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:35.384450    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:35.384514    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:35.395385    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:35.395457    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:35.405918    4989 logs.go:276] 0 containers: []
	W0816 10:38:35.405930    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:35.405986    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:35.416813    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:35.416831    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:35.416836    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:35.452831    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:35.452843    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:35.467807    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:35.467816    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:35.482137    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:38:35.482147    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:38:35.494013    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:35.494025    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:35.498518    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:38:35.498528    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:38:35.510185    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:35.510197    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:35.522119    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:35.522132    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:35.542473    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:35.542482    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:35.554598    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:35.554608    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:35.566645    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:35.566659    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:35.578547    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:35.578562    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:35.596414    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:35.596424    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:35.608241    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:35.608250    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:35.643530    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:35.643538    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:41.175827    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:41.176032    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:41.193879    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:41.193978    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:41.207833    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:41.207907    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:41.219066    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:41.219135    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:41.233042    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:41.233115    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:41.243601    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:41.243673    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:41.254118    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:41.254188    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:41.264601    5136 logs.go:276] 0 containers: []
	W0816 10:38:41.264616    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:41.264678    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:41.274728    5136 logs.go:276] 0 containers: []
	W0816 10:38:41.274739    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:41.274748    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:41.274753    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:41.311238    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:41.311248    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:41.360663    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:41.360685    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:41.375361    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:41.375374    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:41.389960    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:41.389976    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:41.403992    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:41.404008    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:41.418097    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:41.418111    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:41.429870    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:41.429882    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:41.445189    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:41.445203    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:41.449637    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:41.449643    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:41.483360    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:41.483375    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:41.500661    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:41.500677    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:41.523536    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:41.523545    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:41.535903    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:41.535915    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:41.555668    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:41.555684    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:38.171280    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:44.069176    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:43.173432    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:43.173895    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:43.226344    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:43.226472    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:43.243592    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:43.243672    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:43.256858    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:38:43.256935    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:43.267999    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:43.268071    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:43.281716    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:43.281793    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:43.292827    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:43.292897    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:43.303560    4989 logs.go:276] 0 containers: []
	W0816 10:38:43.303573    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:43.303630    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:43.314704    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:43.314723    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:43.314729    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:43.319300    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:43.319308    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:43.331411    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:43.331423    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:43.346653    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:43.346666    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:43.364313    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:43.364322    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:43.379112    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:43.379125    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:43.414136    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:43.414153    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:43.426793    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:43.426807    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:43.451251    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:43.451258    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:43.463108    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:43.463119    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:43.501388    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:43.501402    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:43.515489    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:38:43.515498    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:38:43.527092    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:43.527103    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:43.539195    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:43.539208    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:43.553700    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:38:43.553713    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
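	Alongside the per-container logs, every pass also collects three host-level sources: the kubelet unit, the docker and cri-docker units, and the filtered kernel ring buffer. A sketch that runs the same commands (copied verbatim from the Run: lines above) on a local machine rather than over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        }
        for _, s := range sources {
            fmt.Printf("Gathering logs for %s ...\n", s.name)
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            if err != nil {
                fmt.Println("error:", err)
            }
            fmt.Print(string(out))
        }
    }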
	I0816 10:38:46.066900    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:49.071346    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:49.071562    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:49.090704    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:49.090794    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:49.104660    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:49.104731    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:49.115935    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:49.116006    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:49.126212    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:49.126279    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:49.137074    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:49.137148    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:49.147667    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:49.147730    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:49.157431    5136 logs.go:276] 0 containers: []
	W0816 10:38:49.157442    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:49.157498    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:49.167451    5136 logs.go:276] 0 containers: []
	W0816 10:38:49.167462    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:49.167470    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:49.167476    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:49.210870    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:49.210887    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:49.225108    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:49.225119    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:49.237360    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:49.237371    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:49.249314    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:49.249329    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:49.262932    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:49.262945    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:49.296880    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:49.296896    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:49.311529    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:49.311540    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:49.334183    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:49.334190    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:49.338152    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:49.338161    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:49.354336    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:49.354347    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:49.392725    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:49.392739    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:49.407175    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:49.407191    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:49.420322    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:49.420333    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:49.437530    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:49.437540    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
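	The last two diagnostics in each pass are "describe nodes", which invokes the cluster's pinned kubectl binary against /var/lib/minikube/kubeconfig, and "container status", whose one-liner prefers crictl and falls back to "sudo docker ps -a" when crictl is not installed (that is what the `which crictl || echo crictl` substitution achieves). A sketch with both command strings copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(string(out))
    }

    func main() {
        // Uses the cluster's own kubectl binary and kubeconfig, as in the log.
        run("sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        // Prefer crictl if present; otherwise fall back to docker ps -a.
        run("sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }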
	I0816 10:38:51.952529    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:51.067215    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:51.067444    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:51.102398    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:51.102518    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:51.119518    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:51.119597    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:51.132818    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:38:51.132894    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:51.144188    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:51.144263    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:51.154355    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:51.154423    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:51.165005    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:51.165069    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:51.175406    4989 logs.go:276] 0 containers: []
	W0816 10:38:51.175419    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:51.175480    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:51.185657    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:51.185674    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:51.185680    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:51.219782    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:51.219793    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:51.231515    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:51.231524    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:51.250826    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:51.250835    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:51.275632    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:51.275644    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:51.287423    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:51.287435    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:51.301742    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:38:51.301752    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:38:51.313298    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:51.313308    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:51.331016    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:51.331026    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:51.366213    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:51.366222    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:51.379889    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:51.379899    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:51.391664    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:51.391679    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:51.396525    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:38:51.396531    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:38:51.407819    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:51.407833    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:51.419440    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:51.419451    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:56.954768    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:56.955136    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:56.989762    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:56.989892    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:57.008668    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:57.008765    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:57.023034    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:57.023114    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:57.035602    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:57.035675    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:57.047477    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:57.047550    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:57.058403    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:57.058476    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:57.068736    5136 logs.go:276] 0 containers: []
	W0816 10:38:57.068753    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:57.068817    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:57.079277    5136 logs.go:276] 0 containers: []
	W0816 10:38:57.079291    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:57.079298    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:57.079304    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:57.090422    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:57.090436    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:57.102322    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:57.102333    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:57.115749    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:57.115759    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:57.130663    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:57.130676    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:57.142646    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:57.142660    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:57.154113    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:57.154125    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:57.171530    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:57.171541    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:57.206107    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:57.206120    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:57.243917    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:57.243929    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:57.258075    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:57.258086    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:57.273596    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:57.273606    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:57.292478    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:57.292489    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:57.315539    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:57.315554    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:57.353238    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:57.353247    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
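	Reading the timestamps, two minikube processes (PIDs 4989 and 5136) interleave the same probe-then-gather cycle every five to eight seconds, and neither apiserver ever answers. A schematic sketch of that control flow, with placeholder steps and assumed durations (the real deadlines and back-off are minikube's, not shown in this log):

    package main

    import (
        "fmt"
        "time"
    )

    // apiServerHealthy stands in for the healthz GET from the first sketch.
    func apiServerHealthy() bool { return false }

    // gatherDiagnostics stands in for the container enumeration, log tails,
    // journalctl, and describe-nodes steps from the sketches above.
    func gatherDiagnostics() {}

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
        for time.Now().Before(deadline) {
            if apiServerHealthy() {
                fmt.Println("apiserver is healthy")
                return
            }
            gatherDiagnostics()
            time.Sleep(2 * time.Second) // assumed pause between passes
        }
        fmt.Println("gave up waiting for the apiserver")
    }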
	I0816 10:38:53.933655    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:59.858685    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:58.935834    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:58.935974    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:58.953337    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:38:58.953425    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:58.964771    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:38:58.964854    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:58.976408    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:38:58.976484    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:58.987167    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:38:58.987250    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:58.998162    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:38:58.998230    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:59.008303    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:38:59.008374    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:59.018269    4989 logs.go:276] 0 containers: []
	W0816 10:38:59.018280    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:59.018345    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:59.028620    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:38:59.028638    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:59.028644    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:59.063455    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:38:59.063467    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:38:59.085887    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:38:59.085899    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:59.097881    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:59.097895    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:59.102266    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:38:59.102276    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:38:59.117374    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:38:59.117388    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:38:59.135918    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:38:59.135929    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:38:59.148710    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:38:59.148724    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:38:59.160363    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:59.160376    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:59.195440    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:38:59.195452    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:38:59.211544    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:38:59.211556    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:38:59.229073    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:38:59.229086    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:38:59.244287    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:38:59.244297    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:38:59.256068    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:38:59.256078    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:38:59.278445    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:59.278454    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:01.804923    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:04.860155    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:04.860362    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:04.885582    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:39:04.885700    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:04.902073    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:39:04.902166    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:04.916246    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:39:04.916321    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:04.927142    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:39:04.927207    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:04.937856    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:39:04.937926    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:04.951971    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:39:04.952038    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:04.961857    5136 logs.go:276] 0 containers: []
	W0816 10:39:04.961869    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:04.961926    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:04.971660    5136 logs.go:276] 0 containers: []
	W0816 10:39:04.971675    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:39:04.971682    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:04.971687    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:05.008544    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:39:05.008555    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:39:05.021875    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:05.021885    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:05.026109    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:39:05.026116    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:39:05.040600    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:39:05.040610    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:39:05.055278    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:39:05.055290    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:39:05.070743    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:39:05.070754    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:39:05.088957    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:05.088968    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:05.112603    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:05.112611    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:05.146752    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:39:05.146763    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:39:05.192542    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:39:05.192556    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:39:05.207035    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:39:05.207046    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:39:05.222336    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:39:05.222347    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:39:05.234758    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:39:05.234771    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:05.246344    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:39:05.246356    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:39:07.760709    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:06.807222    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:06.807447    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:06.826002    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:06.826088    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:06.839542    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:06.839616    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:06.851361    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:06.851429    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:06.863101    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:06.863162    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:06.874168    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:06.874241    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:06.884876    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:06.884945    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:06.897064    4989 logs.go:276] 0 containers: []
	W0816 10:39:06.897074    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:06.897131    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:06.907277    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:06.907294    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:06.907300    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:06.921027    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:06.921038    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:06.932839    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:06.932850    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:06.944158    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:06.944169    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:06.961915    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:06.961924    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:06.998596    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:06.998609    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:07.009907    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:07.009918    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:07.021646    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:07.021657    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:07.038623    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:07.038633    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:07.071830    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:07.071841    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:07.076417    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:07.076423    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:07.090871    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:07.090881    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:07.102674    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:07.102683    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:07.117296    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:07.117305    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:07.129173    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:07.129189    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
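
The cycle above repeats throughout this run: for each control-plane component, minikube lists candidate containers with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tails the last 400 lines of each one with docker logs. A minimal Go sketch of that enumerate-then-tail pattern follows; the helper names listContainers and tailLogs are illustrative, not minikube's internal API.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns the IDs of containers named k8s_<component>,
    // mirroring the `docker ps -a --filter ... --format` calls in the trace.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs fetches the last 400 log lines of one container,
    // like the `docker logs --tail 400 <id>` invocations above.
    func tailLogs(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil || len(ids) == 0 {
                // corresponds to the W-level "No container was found matching" lines
                fmt.Printf("no container found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                logs, _ := tailLogs(id)
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }
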
	I0816 10:39:12.761966    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:12.762185    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:12.786461    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:39:12.786550    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:12.798023    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:39:12.798096    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:12.808684    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:39:12.808765    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:12.819515    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:39:12.819585    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:12.830395    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:39:12.830463    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:12.843062    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:39:12.843134    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:12.853564    5136 logs.go:276] 0 containers: []
	W0816 10:39:12.853580    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:12.853644    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:12.868251    5136 logs.go:276] 0 containers: []
	W0816 10:39:12.868262    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:39:12.868269    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:39:12.868274    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:39:12.882098    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:39:12.882109    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:39:12.899458    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:12.899468    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:12.923329    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:12.923338    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:09.655215    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:12.961552    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:39:12.961560    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:39:12.998861    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:39:12.998871    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:39:13.010973    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:39:13.010984    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:39:13.025907    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:39:13.025918    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:39:13.037920    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:39:13.037933    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:39:13.057790    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:13.057805    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:13.061907    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:39:13.061913    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:39:13.078883    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:39:13.078893    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:39:13.093779    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:39:13.093793    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:39:13.108242    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:39:13.108258    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:13.119841    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:13.119853    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:15.656022    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:14.657447    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:14.657709    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:14.687842    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:14.687956    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:14.705559    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:14.705647    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:14.719525    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:14.719603    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:14.738468    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:14.738537    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:14.748846    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:14.748919    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:14.759718    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:14.759782    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:14.774252    4989 logs.go:276] 0 containers: []
	W0816 10:39:14.774263    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:14.774323    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:14.784621    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:14.784640    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:14.784647    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:14.798197    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:14.798206    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:14.809846    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:14.809860    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:14.824771    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:14.824783    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:14.836870    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:14.836882    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:14.854395    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:14.854409    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:14.878325    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:14.878334    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:14.914256    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:14.914268    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:14.927330    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:14.927345    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:14.969173    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:14.969191    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:14.973809    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:14.973815    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:14.987382    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:14.987395    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:14.998878    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:14.998888    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:15.010308    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:15.010318    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:15.022812    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:15.022825    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:17.536231    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
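
Each "Checking apiserver healthz" line is an HTTPS GET against /healthz with a short client timeout; when the VM never answers, the request fails with exactly the "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" error repeated throughout this run. A hedged sketch of such a probe (the 5s timeout and the skip-verify TLS setting are assumptions, not minikube's exact configuration):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // An expired Timeout surfaces as "context deadline exceeded
            // (Client.Timeout exceeded while awaiting headers)", as seen above.
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The guest apiserver cert is not trusted by the probing host;
                // skip verification for the health probe only (assumption).
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }
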
	I0816 10:39:20.657610    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:20.657780    5136 kubeadm.go:597] duration metric: took 4m3.396963125s to restartPrimaryControlPlane
	W0816 10:39:20.657859    5136 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 10:39:20.657905    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0816 10:39:21.598842    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:39:21.603939    5136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:39:21.606871    5136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:39:21.609610    5136 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:39:21.609619    5136 kubeadm.go:157] found existing configuration files:
	
	I0816 10:39:21.609639    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf
	I0816 10:39:21.612053    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:39:21.612072    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:39:21.614783    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf
	I0816 10:39:21.617164    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:39:21.617185    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:39:21.620146    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf
	I0816 10:39:21.623295    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:39:21.623317    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:39:21.626556    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf
	I0816 10:39:21.629272    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:39:21.629297    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
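
The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when it does not reference it (here the files are simply absent after kubeadm reset, so every grep exits with status 2). A sketch of that check-and-remove loop, assuming plain local command execution rather than minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50498"
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, conf := range files {
            path := "/etc/kubernetes/" + conf
            // grep exits non-zero when the endpoint is absent or the file is
            // missing, which drives the "may not be in ... - will remove" lines.
            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
                fmt.Printf("%s does not reference %s, removing\n", path, endpoint)
                _ = exec.Command("sudo", "rm", "-f", path).Run()
            }
        }
    }
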
	I0816 10:39:21.632178    5136 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 10:39:21.648662    5136 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0816 10:39:21.648696    5136 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 10:39:21.696553    5136 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 10:39:21.696644    5136 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 10:39:21.696699    5136 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0816 10:39:21.752680    5136 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 10:39:21.756839    5136 out.go:235]   - Generating certificates and keys ...
	I0816 10:39:21.756963    5136 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 10:39:21.757015    5136 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 10:39:21.757052    5136 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 10:39:21.757085    5136 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 10:39:21.757165    5136 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 10:39:21.757198    5136 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 10:39:21.757229    5136 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 10:39:21.757261    5136 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 10:39:21.757306    5136 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 10:39:21.757369    5136 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 10:39:21.757393    5136 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 10:39:21.757421    5136 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 10:39:21.844193    5136 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 10:39:21.917649    5136 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 10:39:22.091503    5136 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 10:39:22.294140    5136 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 10:39:22.326802    5136 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 10:39:22.327303    5136 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 10:39:22.327356    5136 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 10:39:22.415399    5136 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 10:39:22.423685    5136 out.go:235]   - Booting up control plane ...
	I0816 10:39:22.423735    5136 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 10:39:22.423777    5136 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 10:39:22.423812    5136 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 10:39:22.423853    5136 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 10:39:22.423924    5136 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 10:39:22.538499    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:22.538604    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:22.549367    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:22.549441    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:22.559544    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:22.559614    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:22.570568    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:22.570643    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:22.581495    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:22.581556    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:22.592408    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:22.592479    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:22.604497    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:22.604574    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:22.616158    4989 logs.go:276] 0 containers: []
	W0816 10:39:22.616170    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:22.616234    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:22.627313    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:22.627332    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:22.627338    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:22.640266    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:22.640280    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:22.655338    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:22.655348    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:22.672817    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:22.672828    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:22.685204    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:22.685217    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:22.701558    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:22.701568    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:22.742072    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:22.742083    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:22.753587    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:22.753596    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:22.764885    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:22.764897    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:22.776514    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:22.776526    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:22.788768    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:22.788779    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:22.793035    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:22.793043    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:22.807217    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:22.807228    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:22.818782    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:22.818792    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:22.842013    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:22.842020    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:26.929633    5136 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.505730 seconds
	I0816 10:39:26.929714    5136 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 10:39:26.934595    5136 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 10:39:27.444340    5136 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 10:39:27.444564    5136 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-403000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 10:39:27.949128    5136 kubeadm.go:310] [bootstrap-token] Using token: sa33xc.0uhd5ykuoldhwzac
	I0816 10:39:25.376556    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:27.955477    5136 out.go:235]   - Configuring RBAC rules ...
	I0816 10:39:27.955534    5136 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 10:39:27.955590    5136 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 10:39:27.957355    5136 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 10:39:27.962114    5136 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 10:39:27.963400    5136 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 10:39:27.964291    5136 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 10:39:27.967240    5136 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 10:39:28.169769    5136 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 10:39:28.352524    5136 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 10:39:28.352900    5136 kubeadm.go:310] 
	I0816 10:39:28.352929    5136 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 10:39:28.352958    5136 kubeadm.go:310] 
	I0816 10:39:28.352999    5136 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 10:39:28.353003    5136 kubeadm.go:310] 
	I0816 10:39:28.353031    5136 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 10:39:28.353065    5136 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 10:39:28.353102    5136 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 10:39:28.353105    5136 kubeadm.go:310] 
	I0816 10:39:28.353135    5136 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 10:39:28.353138    5136 kubeadm.go:310] 
	I0816 10:39:28.353160    5136 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 10:39:28.353164    5136 kubeadm.go:310] 
	I0816 10:39:28.353189    5136 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 10:39:28.353228    5136 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 10:39:28.353274    5136 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 10:39:28.353277    5136 kubeadm.go:310] 
	I0816 10:39:28.353323    5136 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 10:39:28.353361    5136 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 10:39:28.353365    5136 kubeadm.go:310] 
	I0816 10:39:28.353413    5136 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sa33xc.0uhd5ykuoldhwzac \
	I0816 10:39:28.353470    5136 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3dbef51adc186d93171c6716e4c9d3e67358220996635d2d9ed7318abf8b1c24 \
	I0816 10:39:28.353480    5136 kubeadm.go:310] 	--control-plane 
	I0816 10:39:28.353483    5136 kubeadm.go:310] 
	I0816 10:39:28.353529    5136 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 10:39:28.353535    5136 kubeadm.go:310] 
	I0816 10:39:28.353572    5136 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sa33xc.0uhd5ykuoldhwzac \
	I0816 10:39:28.353620    5136 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3dbef51adc186d93171c6716e4c9d3e67358220996635d2d9ed7318abf8b1c24 
	I0816 10:39:28.356203    5136 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 10:39:28.356295    5136 cni.go:84] Creating CNI manager for ""
	I0816 10:39:28.356305    5136 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:39:28.360055    5136 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 10:39:28.367051    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 10:39:28.369909    5136 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
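
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. The sketch below writes a representative bridge conflist of the kind the "Configuring bridge CNI" step installs; the subnet and exact fields are assumptions, not the actual bytes transferred.

    package main

    import "os"

    // An illustrative bridge CNI conflist; fields and the 10.244.0.0/16 subnet
    // are assumptions, not the exact 496-byte payload from the log.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err)
        }
    }
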
	I0816 10:39:28.374620    5136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 10:39:28.374676    5136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:39:28.374678    5136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-403000 minikube.k8s.io/updated_at=2024_08_16T10_39_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=stopped-upgrade-403000 minikube.k8s.io/primary=true
	I0816 10:39:28.379841    5136 ops.go:34] apiserver oom_adj: -16
	I0816 10:39:28.417340    5136 kubeadm.go:1113] duration metric: took 42.695792ms to wait for elevateKubeSystemPrivileges
	I0816 10:39:28.417357    5136 kubeadm.go:394] duration metric: took 4m11.170290708s to StartCluster
	I0816 10:39:28.417368    5136 settings.go:142] acquiring lock: {Name:mkd2048b6677d6c95a407663b8dc541f5fa54e50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:39:28.417460    5136 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:39:28.417923    5136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/kubeconfig: {Name:mk2e4f2b039616ddb85ed20d74e703a928518229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:39:28.418151    5136 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:39:28.418159    5136 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:39:28.418197    5136 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-403000"
	I0816 10:39:28.418209    5136 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-403000"
	W0816 10:39:28.418212    5136 addons.go:243] addon storage-provisioner should already be in state true
	I0816 10:39:28.418224    5136 host.go:66] Checking if "stopped-upgrade-403000" exists ...
	I0816 10:39:28.418217    5136 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-403000"
	I0816 10:39:28.418239    5136 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:39:28.418297    5136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-403000"
	I0816 10:39:28.422054    5136 out.go:177] * Verifying Kubernetes components...
	I0816 10:39:28.422931    5136 kapi.go:59] client config for stopped-upgrade-403000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101a3d610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:39:28.426304    5136 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-403000"
	W0816 10:39:28.426309    5136 addons.go:243] addon default-storageclass should already be in state true
	I0816 10:39:28.426316    5136 host.go:66] Checking if "stopped-upgrade-403000" exists ...
	I0816 10:39:28.426816    5136 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 10:39:28.426822    5136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 10:39:28.426827    5136 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/id_rsa Username:docker}
	I0816 10:39:28.430034    5136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:39:28.434024    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:39:28.438055    5136 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:39:28.438061    5136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 10:39:28.438068    5136 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/id_rsa Username:docker}
	I0816 10:39:28.528239    5136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:39:28.533536    5136 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:39:28.533576    5136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:39:28.537428    5136 api_server.go:72] duration metric: took 119.266959ms to wait for apiserver process to appear ...
	I0816 10:39:28.537436    5136 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:39:28.537444    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
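
Before polling /healthz, minikube first waits for the kube-apiserver process itself to appear, which is what the pgrep -xnf run above does (the ~119ms duration metric covers this wait). A sketch of that process-wait loop, with an assumed polling interval:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // pgrep -xnf matches against the full command line and exits
        // non-zero while no kube-apiserver process matches.
        for {
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                break
            }
            time.Sleep(500 * time.Millisecond) // assumed polling interval
        }
        fmt.Println("apiserver process is up; now waiting for /healthz")
    }
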
	I0816 10:39:28.593132    5136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 10:39:28.611764    5136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:39:28.959480    5136 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:39:28.959496    5136 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:39:30.378624    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:30.378802    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:30.390926    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:30.391001    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:30.402372    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:30.402450    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:30.414201    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:30.414278    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:30.424785    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:30.424856    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:30.434896    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:30.434959    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:30.445742    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:30.445807    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:30.455374    4989 logs.go:276] 0 containers: []
	W0816 10:39:30.455386    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:30.455440    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:30.466722    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:30.466739    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:30.466745    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:30.473805    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:30.473812    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:30.510295    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:30.510307    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:30.522956    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:30.522965    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:30.537198    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:30.537207    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:30.570831    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:30.570841    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:30.588928    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:30.588939    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:30.601489    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:30.601501    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:30.616287    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:30.616300    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:30.639916    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:30.639929    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:30.651300    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:30.651313    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:30.663045    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:30.663056    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:30.674934    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:30.674946    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:30.691675    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:30.691688    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:30.709790    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:30.709800    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:33.539458    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:33.539500    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:33.235121    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:38.540151    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:38.540167    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:38.235965    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:38.236063    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:38.246921    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:38.246982    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:38.257641    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:38.257715    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:38.268819    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:38.268894    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:38.279719    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:38.279789    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:38.290502    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:38.290566    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:38.301751    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:38.301817    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:38.311955    4989 logs.go:276] 0 containers: []
	W0816 10:39:38.311968    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:38.312022    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:38.323122    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:38.323139    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:38.323147    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:38.335232    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:38.335243    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:38.349653    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:38.349664    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:38.383838    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:38.383846    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:38.388552    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:38.388557    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:38.402559    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:38.402568    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:38.417174    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:38.417184    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:38.428587    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:38.428600    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:38.440903    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:38.440913    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:38.476300    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:38.476310    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:38.488456    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:38.488466    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:38.513426    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:38.513438    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:38.525680    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:38.525693    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:38.537879    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:38.537893    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:38.553694    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:38.553705    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:41.073572    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:43.540463    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:43.540491    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:46.075688    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:46.075886    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:46.096186    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:46.096276    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:46.110928    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:46.110999    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:46.126615    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:46.126680    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:46.138063    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:46.138131    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:46.148104    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:46.148167    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:46.158747    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:46.158811    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:46.168765    4989 logs.go:276] 0 containers: []
	W0816 10:39:46.168778    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:46.168829    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:46.179183    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:46.179201    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:46.179205    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:46.202194    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:46.202203    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:46.206238    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:46.206247    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:46.244817    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:46.244833    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:46.257301    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:46.257313    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:46.272679    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:46.272691    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:46.284072    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:46.284083    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:46.299408    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:46.299421    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:46.311457    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:46.311471    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:46.346671    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:46.346684    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:46.366428    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:46.366455    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:46.377370    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:46.377378    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:46.392119    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:46.392133    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:46.419240    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:46.419251    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:46.433688    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:46.433702    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:48.540959    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:48.540997    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:48.947476    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:53.541676    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:53.541723    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:53.949731    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:53.950119    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:53.992042    4989 logs.go:276] 1 containers: [ccd266393b75]
	I0816 10:39:53.992177    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:54.013626    4989 logs.go:276] 1 containers: [2e87491cb270]
	I0816 10:39:54.013753    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:54.029864    4989 logs.go:276] 4 containers: [2f29959ae8c6 e885044c45bf 22be3ed5da22 95f216c8e7c0]
	I0816 10:39:54.029945    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:54.041825    4989 logs.go:276] 1 containers: [7ecafdaff2ce]
	I0816 10:39:54.041896    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:54.052368    4989 logs.go:276] 1 containers: [81bcc9d077a7]
	I0816 10:39:54.052429    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:54.063455    4989 logs.go:276] 1 containers: [ffe05557987e]
	I0816 10:39:54.063520    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:54.073795    4989 logs.go:276] 0 containers: []
	W0816 10:39:54.073810    4989 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:54.073863    4989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:54.084098    4989 logs.go:276] 1 containers: [c11edd52065e]
	I0816 10:39:54.084117    4989 logs.go:123] Gathering logs for kube-proxy [81bcc9d077a7] ...
	I0816 10:39:54.084122    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81bcc9d077a7"
	I0816 10:39:54.099033    4989 logs.go:123] Gathering logs for storage-provisioner [c11edd52065e] ...
	I0816 10:39:54.099045    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11edd52065e"
	I0816 10:39:54.110838    4989 logs.go:123] Gathering logs for kube-apiserver [ccd266393b75] ...
	I0816 10:39:54.110850    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ccd266393b75"
	I0816 10:39:54.127022    4989 logs.go:123] Gathering logs for coredns [22be3ed5da22] ...
	I0816 10:39:54.127033    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22be3ed5da22"
	I0816 10:39:54.139519    4989 logs.go:123] Gathering logs for kube-scheduler [7ecafdaff2ce] ...
	I0816 10:39:54.139533    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ecafdaff2ce"
	I0816 10:39:54.154196    4989 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:54.154210    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:54.188246    4989 logs.go:123] Gathering logs for etcd [2e87491cb270] ...
	I0816 10:39:54.188253    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e87491cb270"
	I0816 10:39:54.202270    4989 logs.go:123] Gathering logs for coredns [95f216c8e7c0] ...
	I0816 10:39:54.202279    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95f216c8e7c0"
	I0816 10:39:54.214189    4989 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:54.214199    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:54.218822    4989 logs.go:123] Gathering logs for coredns [e885044c45bf] ...
	I0816 10:39:54.218829    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e885044c45bf"
	I0816 10:39:54.230123    4989 logs.go:123] Gathering logs for kube-controller-manager [ffe05557987e] ...
	I0816 10:39:54.230133    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffe05557987e"
	I0816 10:39:54.253016    4989 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:54.253028    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:54.278025    4989 logs.go:123] Gathering logs for container status ...
	I0816 10:39:54.278036    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:54.289918    4989 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:54.289932    4989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:54.325796    4989 logs.go:123] Gathering logs for coredns [2f29959ae8c6] ...
	I0816 10:39:54.325809    4989 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f29959ae8c6"
	I0816 10:39:56.838675    4989 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:58.543099    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:58.543145    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0816 10:39:58.961225    5136 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0816 10:39:58.965638    5136 out.go:177] * Enabled addons: storage-provisioner
	I0816 10:40:01.840854    4989 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:01.845250    4989 out.go:201] 
	W0816 10:40:01.851242    4989 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0816 10:40:01.851250    4989 out.go:270] * 
	W0816 10:40:01.851711    4989 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:40:01.867155    4989 out.go:201] 
	I0816 10:39:58.973507    5136 addons.go:510] duration metric: took 30.556081958s for enable addons: enabled=[storage-provisioner]
	I0816 10:40:03.544401    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:03.544518    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:40:08.546188    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:08.546224    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-08-16 17:31:00 UTC, ends at Fri 2024-08-16 17:40:17 UTC. --
	Aug 16 17:40:02 running-upgrade-260000 dockerd[3232]: time="2024-08-16T17:40:02.132817119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 16 17:40:02 running-upgrade-260000 dockerd[3232]: time="2024-08-16T17:40:02.132890700Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c2bce5a5e18d424b1ee918b5ce1b699c71827587bb644ec7e001fac5f0cc6f28 pid=18835 runtime=io.containerd.runc.v2
	Aug 16 17:40:02 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:02Z" level=error msg="ContainerStats resp: {0x40005fce80 linux}"
	Aug 16 17:40:02 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:02Z" level=error msg="ContainerStats resp: {0x40005fc340 linux}"
	Aug 16 17:40:03 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:03Z" level=error msg="ContainerStats resp: {0x400078a280 linux}"
	Aug 16 17:40:04 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:04Z" level=error msg="ContainerStats resp: {0x400035ba80 linux}"
	Aug 16 17:40:04 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:04Z" level=error msg="ContainerStats resp: {0x400035bc40 linux}"
	Aug 16 17:40:04 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:04Z" level=error msg="ContainerStats resp: {0x4000918400 linux}"
	Aug 16 17:40:04 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:04Z" level=error msg="ContainerStats resp: {0x400078ba00 linux}"
	Aug 16 17:40:04 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:04Z" level=error msg="ContainerStats resp: {0x400078bdc0 linux}"
	Aug 16 17:40:04 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:04Z" level=error msg="ContainerStats resp: {0x4000918e40 linux}"
	Aug 16 17:40:04 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:04Z" level=error msg="ContainerStats resp: {0x4000919600 linux}"
	Aug 16 17:40:04 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:04Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 16 17:40:09 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:09Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 16 17:40:14 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:14Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 16 17:40:14 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:14Z" level=error msg="ContainerStats resp: {0x4000855d80 linux}"
	Aug 16 17:40:14 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:14Z" level=error msg="ContainerStats resp: {0x4000416a00 linux}"
	Aug 16 17:40:15 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:15Z" level=error msg="ContainerStats resp: {0x400078b5c0 linux}"
	Aug 16 17:40:16 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:16Z" level=error msg="ContainerStats resp: {0x4000919980 linux}"
	Aug 16 17:40:16 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:16Z" level=error msg="ContainerStats resp: {0x4000919fc0 linux}"
	Aug 16 17:40:16 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:16Z" level=error msg="ContainerStats resp: {0x400035a780 linux}"
	Aug 16 17:40:16 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:16Z" level=error msg="ContainerStats resp: {0x400086cb80 linux}"
	Aug 16 17:40:16 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:16Z" level=error msg="ContainerStats resp: {0x400086ccc0 linux}"
	Aug 16 17:40:16 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:16Z" level=error msg="ContainerStats resp: {0x400086d1c0 linux}"
	Aug 16 17:40:16 running-upgrade-260000 cri-dockerd[3074]: time="2024-08-16T17:40:16Z" level=error msg="ContainerStats resp: {0x400035bb00 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	c2bce5a5e18d4       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   6f99406e83af0
	71c9d6f34ee3f       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   f595307de437e
	2f29959ae8c67       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   6f99406e83af0
	e885044c45bf2       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   f595307de437e
	81bcc9d077a77       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   974a77251ba8a
	c11edd52065e7       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   2542d61815579
	2e87491cb270a       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   dbff70ceed590
	ffe05557987ec       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   0fb0700a7bc4f
	ccd266393b75e       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   05f57d342584a
	7ecafdaff2ce1       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   5a2e1833d4924
	
	
	==> coredns [2f29959ae8c6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3146769321778568997.3180463832968027590. HINFO: read udp 10.244.0.2:42103->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3146769321778568997.3180463832968027590. HINFO: read udp 10.244.0.2:58410->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3146769321778568997.3180463832968027590. HINFO: read udp 10.244.0.2:58957->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3146769321778568997.3180463832968027590. HINFO: read udp 10.244.0.2:55260->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3146769321778568997.3180463832968027590. HINFO: read udp 10.244.0.2:36165->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3146769321778568997.3180463832968027590. HINFO: read udp 10.244.0.2:40088->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3146769321778568997.3180463832968027590. HINFO: read udp 10.244.0.2:53827->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3146769321778568997.3180463832968027590. HINFO: read udp 10.244.0.2:36498->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3146769321778568997.3180463832968027590. HINFO: read udp 10.244.0.2:33424->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3146769321778568997.3180463832968027590. HINFO: read udp 10.244.0.2:50037->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [71c9d6f34ee3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7201721706978108512.7869686912045252407. HINFO: read udp 10.244.0.3:42632->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7201721706978108512.7869686912045252407. HINFO: read udp 10.244.0.3:45564->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7201721706978108512.7869686912045252407. HINFO: read udp 10.244.0.3:37770->10.0.2.3:53: i/o timeout
	
	
	==> coredns [c2bce5a5e18d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3889297666988308500.2940317370265975536. HINFO: read udp 10.244.0.2:56821->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3889297666988308500.2940317370265975536. HINFO: read udp 10.244.0.2:34864->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3889297666988308500.2940317370265975536. HINFO: read udp 10.244.0.2:60483->10.0.2.3:53: i/o timeout
	
	
	==> coredns [e885044c45bf] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3912678692906095163.6473535657224122857. HINFO: read udp 10.244.0.3:49215->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3912678692906095163.6473535657224122857. HINFO: read udp 10.244.0.3:32845->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3912678692906095163.6473535657224122857. HINFO: read udp 10.244.0.3:43921->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3912678692906095163.6473535657224122857. HINFO: read udp 10.244.0.3:54513->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-260000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-260000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=running-upgrade-260000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T10_36_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:35:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-260000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:40:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:36:00 +0000   Fri, 16 Aug 2024 17:35:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:36:00 +0000   Fri, 16 Aug 2024 17:35:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:36:00 +0000   Fri, 16 Aug 2024 17:35:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:36:00 +0000   Fri, 16 Aug 2024 17:36:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-260000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 969f133dd0134d0da540e2baf1740a05
	  System UUID:                969f133dd0134d0da540e2baf1740a05
	  Boot ID:                    ae6e71df-d3b8-44f4-8db1-5da7d956bc72
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-lg6sr                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m5s
	  kube-system                 coredns-6d4b75cb6d-q5jv9                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m5s
	  kube-system                 etcd-running-upgrade-260000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-260000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-260000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-grw29                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-260000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m18s  kubelet          Node running-upgrade-260000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m18s  kubelet          Node running-upgrade-260000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s  kubelet          Node running-upgrade-260000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s  kubelet          Node running-upgrade-260000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-260000 event: Registered Node running-upgrade-260000 in Controller
	
	
	==> dmesg <==
	[  +1.335726] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.081052] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.088039] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.137263] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.085436] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.083934] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.047863] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[ +10.655842] systemd-fstab-generator[1941]: Ignoring "noauto" for root device
	[  +2.833012] systemd-fstab-generator[2218]: Ignoring "noauto" for root device
	[  +0.140864] systemd-fstab-generator[2252]: Ignoring "noauto" for root device
	[  +0.092715] systemd-fstab-generator[2263]: Ignoring "noauto" for root device
	[  +0.099933] systemd-fstab-generator[2276]: Ignoring "noauto" for root device
	[ +13.442688] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.215704] systemd-fstab-generator[3028]: Ignoring "noauto" for root device
	[  +0.084302] systemd-fstab-generator[3042]: Ignoring "noauto" for root device
	[  +0.080473] systemd-fstab-generator[3053]: Ignoring "noauto" for root device
	[  +0.096297] systemd-fstab-generator[3067]: Ignoring "noauto" for root device
	[  +2.278973] systemd-fstab-generator[3219]: Ignoring "noauto" for root device
	[  +2.692016] systemd-fstab-generator[3598]: Ignoring "noauto" for root device
	[  +1.252683] systemd-fstab-generator[3914]: Ignoring "noauto" for root device
	[Aug16 17:32] kauditd_printk_skb: 68 callbacks suppressed
	[Aug16 17:35] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.270638] systemd-fstab-generator[11951]: Ignoring "noauto" for root device
	[  +5.660919] systemd-fstab-generator[12535]: Ignoring "noauto" for root device
	[Aug16 17:36] systemd-fstab-generator[12667]: Ignoring "noauto" for root device
	
	
	==> etcd [2e87491cb270] <==
	{"level":"info","ts":"2024-08-16T17:35:56.447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-16T17:35:56.462Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T17:35:56.462Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T17:35:56.462Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T17:35:56.462Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-16T17:35:56.462Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-16T17:35:56.462Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-16T17:35:56.778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-16T17:35:56.778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-16T17:35:56.778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-16T17:35:56.778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-16T17:35:56.778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-16T17:35:56.778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-16T17:35:56.778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-16T17:35:56.778Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:35:56.778Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-260000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T17:35:56.778Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:35:56.779Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-16T17:35:56.779Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:35:56.780Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-16T17:35:56.781Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:35:56.781Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:35:56.781Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:35:56.781Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T17:35:56.781Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 17:40:18 up 9 min,  0 users,  load average: 0.85, 0.61, 0.30
	Linux running-upgrade-260000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [ccd266393b75] <==
	I0816 17:35:58.017143       1 controller.go:611] quota admission added evaluator for: namespaces
	I0816 17:35:58.017505       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0816 17:35:58.072181       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0816 17:35:58.086865       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0816 17:35:58.086901       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 17:35:58.086945       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0816 17:35:58.087264       1 cache.go:39] Caches are synced for autoregister controller
	I0816 17:35:58.820554       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 17:35:58.980682       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0816 17:35:58.984888       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0816 17:35:58.984915       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0816 17:35:59.153108       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 17:35:59.167475       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 17:35:59.221341       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0816 17:35:59.223440       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0816 17:35:59.223842       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 17:35:59.224995       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 17:36:00.121038       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 17:36:00.651761       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 17:36:00.655407       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0816 17:36:00.660377       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 17:36:00.710993       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 17:36:13.676290       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 17:36:13.776755       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0816 17:36:14.571403       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [ffe05557987e] <==
	I0816 17:36:12.990164       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0816 17:36:12.992756       1 range_allocator.go:374] Set node running-upgrade-260000 PodCIDR to [10.244.0.0/24]
	I0816 17:36:13.008765       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0816 17:36:13.022031       1 shared_informer.go:262] Caches are synced for namespace
	I0816 17:36:13.023067       1 shared_informer.go:262] Caches are synced for job
	I0816 17:36:13.030730       1 shared_informer.go:262] Caches are synced for daemon sets
	I0816 17:36:13.070458       1 shared_informer.go:262] Caches are synced for service account
	I0816 17:36:13.122720       1 shared_informer.go:262] Caches are synced for taint
	I0816 17:36:13.122798       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0816 17:36:13.122836       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-260000. Assuming now as a timestamp.
	I0816 17:36:13.122872       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0816 17:36:13.122956       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0816 17:36:13.123083       1 event.go:294] "Event occurred" object="running-upgrade-260000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-260000 event: Registered Node running-upgrade-260000 in Controller"
	I0816 17:36:13.123907       1 shared_informer.go:262] Caches are synced for persistent volume
	I0816 17:36:13.149250       1 shared_informer.go:262] Caches are synced for resource quota
	I0816 17:36:13.170956       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0816 17:36:13.172015       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0816 17:36:13.177742       1 shared_informer.go:262] Caches are synced for resource quota
	I0816 17:36:13.592763       1 shared_informer.go:262] Caches are synced for garbage collector
	I0816 17:36:13.624311       1 shared_informer.go:262] Caches are synced for garbage collector
	I0816 17:36:13.624361       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 17:36:13.677395       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0816 17:36:13.779500       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-grw29"
	I0816 17:36:13.978525       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-lg6sr"
	I0816 17:36:13.994549       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-q5jv9"
	
	
	==> kube-proxy [81bcc9d077a7] <==
	I0816 17:36:14.560174       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0816 17:36:14.560198       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0816 17:36:14.560208       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0816 17:36:14.569011       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0816 17:36:14.569022       1 server_others.go:206] "Using iptables Proxier"
	I0816 17:36:14.569042       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0816 17:36:14.569130       1 server.go:661] "Version info" version="v1.24.1"
	I0816 17:36:14.569134       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:36:14.569354       1 config.go:317] "Starting service config controller"
	I0816 17:36:14.569360       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0816 17:36:14.569367       1 config.go:226] "Starting endpoint slice config controller"
	I0816 17:36:14.569369       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0816 17:36:14.570270       1 config.go:444] "Starting node config controller"
	I0816 17:36:14.570278       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0816 17:36:14.670318       1 shared_informer.go:262] Caches are synced for node config
	I0816 17:36:14.670322       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0816 17:36:14.670330       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [7ecafdaff2ce] <==
	W0816 17:35:58.028802       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:35:58.028805       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0816 17:35:58.028817       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:35:58.028820       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0816 17:35:58.028831       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 17:35:58.028835       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0816 17:35:58.028848       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 17:35:58.028854       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0816 17:35:58.028894       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 17:35:58.028902       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0816 17:35:58.028938       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 17:35:58.028988       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0816 17:35:58.029067       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:35:58.029072       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0816 17:35:58.029121       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:35:58.029138       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0816 17:35:58.848000       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 17:35:58.848071       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0816 17:35:58.980018       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 17:35:58.980375       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0816 17:35:58.980621       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:35:58.980867       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0816 17:35:59.033715       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:35:59.033853       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0816 17:35:59.227607       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-08-16 17:31:00 UTC, ends at Fri 2024-08-16 17:40:18 UTC. --
	Aug 16 17:36:02 running-upgrade-260000 kubelet[12541]: E0816 17:36:02.485913   12541 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-260000\" already exists" pod="kube-system/etcd-running-upgrade-260000"
	Aug 16 17:36:02 running-upgrade-260000 kubelet[12541]: E0816 17:36:02.685742   12541 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-260000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-260000"
	Aug 16 17:36:02 running-upgrade-260000 kubelet[12541]: I0816 17:36:02.882730   12541 request.go:601] Waited for 1.14112205s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 16 17:36:02 running-upgrade-260000 kubelet[12541]: E0816 17:36:02.885668   12541 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-260000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-260000"
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: I0816 17:36:13.008770   12541 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: I0816 17:36:13.009159   12541 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: I0816 17:36:13.130039   12541 topology_manager.go:200] "Topology Admit Handler"
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: I0816 17:36:13.312457   12541 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9f9z\" (UniqueName: \"kubernetes.io/projected/2f379440-337f-4c72-9983-a54b42a7f3fc-kube-api-access-f9f9z\") pod \"storage-provisioner\" (UID: \"2f379440-337f-4c72-9983-a54b42a7f3fc\") " pod="kube-system/storage-provisioner"
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: I0816 17:36:13.312489   12541 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2f379440-337f-4c72-9983-a54b42a7f3fc-tmp\") pod \"storage-provisioner\" (UID: \"2f379440-337f-4c72-9983-a54b42a7f3fc\") " pod="kube-system/storage-provisioner"
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: E0816 17:36:13.419969   12541 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: E0816 17:36:13.419987   12541 projected.go:192] Error preparing data for projected volume kube-api-access-f9f9z for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: E0816 17:36:13.420021   12541 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/2f379440-337f-4c72-9983-a54b42a7f3fc-kube-api-access-f9f9z podName:2f379440-337f-4c72-9983-a54b42a7f3fc nodeName:}" failed. No retries permitted until 2024-08-16 17:36:13.920008142 +0000 UTC m=+13.278161756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f9f9z" (UniqueName: "kubernetes.io/projected/2f379440-337f-4c72-9983-a54b42a7f3fc-kube-api-access-f9f9z") pod "storage-provisioner" (UID: "2f379440-337f-4c72-9983-a54b42a7f3fc") : configmap "kube-root-ca.crt" not found
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: I0816 17:36:13.781600   12541 topology_manager.go:200] "Topology Admit Handler"
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: I0816 17:36:13.921390   12541 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/15bf5e5e-3540-4fe0-ade2-9ad1694fcb12-kube-proxy\") pod \"kube-proxy-grw29\" (UID: \"15bf5e5e-3540-4fe0-ade2-9ad1694fcb12\") " pod="kube-system/kube-proxy-grw29"
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: I0816 17:36:13.921453   12541 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15bf5e5e-3540-4fe0-ade2-9ad1694fcb12-xtables-lock\") pod \"kube-proxy-grw29\" (UID: \"15bf5e5e-3540-4fe0-ade2-9ad1694fcb12\") " pod="kube-system/kube-proxy-grw29"
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: I0816 17:36:13.921467   12541 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15bf5e5e-3540-4fe0-ade2-9ad1694fcb12-lib-modules\") pod \"kube-proxy-grw29\" (UID: \"15bf5e5e-3540-4fe0-ade2-9ad1694fcb12\") " pod="kube-system/kube-proxy-grw29"
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: I0816 17:36:13.921479   12541 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgc2x\" (UniqueName: \"kubernetes.io/projected/15bf5e5e-3540-4fe0-ade2-9ad1694fcb12-kube-api-access-mgc2x\") pod \"kube-proxy-grw29\" (UID: \"15bf5e5e-3540-4fe0-ade2-9ad1694fcb12\") " pod="kube-system/kube-proxy-grw29"
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: I0816 17:36:13.987963   12541 topology_manager.go:200] "Topology Admit Handler"
	Aug 16 17:36:13 running-upgrade-260000 kubelet[12541]: I0816 17:36:13.992760   12541 topology_manager.go:200] "Topology Admit Handler"
	Aug 16 17:36:14 running-upgrade-260000 kubelet[12541]: I0816 17:36:14.123278   12541 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2f7fc0d-cc86-45c0-a8a9-7e5e72cb69a1-config-volume\") pod \"coredns-6d4b75cb6d-q5jv9\" (UID: \"e2f7fc0d-cc86-45c0-a8a9-7e5e72cb69a1\") " pod="kube-system/coredns-6d4b75cb6d-q5jv9"
	Aug 16 17:36:14 running-upgrade-260000 kubelet[12541]: I0816 17:36:14.123299   12541 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcntv\" (UniqueName: \"kubernetes.io/projected/e2f7fc0d-cc86-45c0-a8a9-7e5e72cb69a1-kube-api-access-wcntv\") pod \"coredns-6d4b75cb6d-q5jv9\" (UID: \"e2f7fc0d-cc86-45c0-a8a9-7e5e72cb69a1\") " pod="kube-system/coredns-6d4b75cb6d-q5jv9"
	Aug 16 17:36:14 running-upgrade-260000 kubelet[12541]: I0816 17:36:14.123312   12541 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1831eed5-05b5-4953-9281-5a03cefe39e3-config-volume\") pod \"coredns-6d4b75cb6d-lg6sr\" (UID: \"1831eed5-05b5-4953-9281-5a03cefe39e3\") " pod="kube-system/coredns-6d4b75cb6d-lg6sr"
	Aug 16 17:36:14 running-upgrade-260000 kubelet[12541]: I0816 17:36:14.123327   12541 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ddkg\" (UniqueName: \"kubernetes.io/projected/1831eed5-05b5-4953-9281-5a03cefe39e3-kube-api-access-7ddkg\") pod \"coredns-6d4b75cb6d-lg6sr\" (UID: \"1831eed5-05b5-4953-9281-5a03cefe39e3\") " pod="kube-system/coredns-6d4b75cb6d-lg6sr"
	Aug 16 17:40:02 running-upgrade-260000 kubelet[12541]: I0816 17:40:02.218674   12541 scope.go:110] "RemoveContainer" containerID="22be3ed5da22586ead9eaf402860e9122a6dba0db58722a4bbaf4d7b0b8ab910"
	Aug 16 17:40:02 running-upgrade-260000 kubelet[12541]: I0816 17:40:02.228618   12541 scope.go:110] "RemoveContainer" containerID="95f216c8e7c03bde69517736aaccd094fb89aa0f39ba2c43c5d3cc478ae490bf"
	
	
	==> storage-provisioner [c11edd52065e] <==
	I0816 17:36:14.227214       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 17:36:14.231501       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 17:36:14.231521       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 17:36:14.234668       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 17:36:14.234725       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-260000_2b0c5ef8-5a68-4b2f-ae54-f82a611e1f91!
	I0816 17:36:14.235051       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"09ffc670-a2b2-47f3-9185-f42b397649f9", APIVersion:"v1", ResourceVersion:"354", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-260000_2b0c5ef8-5a68-4b2f-ae54-f82a611e1f91 became leader
	I0816 17:36:14.335521       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-260000_2b0c5ef8-5a68-4b2f-ae54-f82a611e1f91!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-260000 -n running-upgrade-260000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-260000 -n running-upgrade-260000: exit status 2 (15.6545645s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-260000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-260000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-260000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-260000: (1.163699083s)
--- FAIL: TestRunningBinaryUpgrade (599.39s)

                                                
                                    
TestKubernetesUpgrade (18.46s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-629000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-629000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.762841583s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-629000" primary control-plane node in "kubernetes-upgrade-629000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-629000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:33:36.333130    5062 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:33:36.333248    5062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:33:36.333251    5062 out.go:358] Setting ErrFile to fd 2...
	I0816 10:33:36.333254    5062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:33:36.333395    5062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:33:36.334457    5062 out.go:352] Setting JSON to false
	I0816 10:33:36.350983    5062 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3779,"bootTime":1723825837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:33:36.351045    5062 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:33:36.355709    5062 out.go:177] * [kubernetes-upgrade-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:33:36.363615    5062 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:33:36.363675    5062 notify.go:220] Checking for updates...
	I0816 10:33:36.369574    5062 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:33:36.372696    5062 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:33:36.375566    5062 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:33:36.378556    5062 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:33:36.381563    5062 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:33:36.384964    5062 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:33:36.385033    5062 config.go:182] Loaded profile config "running-upgrade-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:33:36.385076    5062 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:33:36.389627    5062 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:33:36.396567    5062 start.go:297] selected driver: qemu2
	I0816 10:33:36.396573    5062 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:33:36.396578    5062 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:33:36.398820    5062 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:33:36.401578    5062 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:33:36.404615    5062 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 10:33:36.404646    5062 cni.go:84] Creating CNI manager for ""
	I0816 10:33:36.404652    5062 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0816 10:33:36.404673    5062 start.go:340] cluster config:
	{Name:kubernetes-upgrade-629000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:33:36.408187    5062 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:33:36.415648    5062 out.go:177] * Starting "kubernetes-upgrade-629000" primary control-plane node in "kubernetes-upgrade-629000" cluster
	I0816 10:33:36.419561    5062 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 10:33:36.419578    5062 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 10:33:36.419587    5062 cache.go:56] Caching tarball of preloaded images
	I0816 10:33:36.419635    5062 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:33:36.419640    5062 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0816 10:33:36.419689    5062 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/kubernetes-upgrade-629000/config.json ...
	I0816 10:33:36.419698    5062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/kubernetes-upgrade-629000/config.json: {Name:mk8153b155d4d744dee4e7bf6fbb919e90ae0db9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:33:36.420038    5062 start.go:360] acquireMachinesLock for kubernetes-upgrade-629000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:33:36.420078    5062 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "kubernetes-upgrade-629000"
	I0816 10:33:36.420092    5062 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:33:36.420117    5062 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:33:36.427525    5062 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:33:36.443291    5062 start.go:159] libmachine.API.Create for "kubernetes-upgrade-629000" (driver="qemu2")
	I0816 10:33:36.443312    5062 client.go:168] LocalClient.Create starting
	I0816 10:33:36.443368    5062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:33:36.443400    5062 main.go:141] libmachine: Decoding PEM data...
	I0816 10:33:36.443409    5062 main.go:141] libmachine: Parsing certificate...
	I0816 10:33:36.443440    5062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:33:36.443463    5062 main.go:141] libmachine: Decoding PEM data...
	I0816 10:33:36.443471    5062 main.go:141] libmachine: Parsing certificate...
	I0816 10:33:36.443913    5062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:33:36.570944    5062 main.go:141] libmachine: Creating SSH key...
	I0816 10:33:36.692309    5062 main.go:141] libmachine: Creating Disk image...
	I0816 10:33:36.692314    5062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:33:36.692522    5062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2
	I0816 10:33:36.701886    5062 main.go:141] libmachine: STDOUT: 
	I0816 10:33:36.701906    5062 main.go:141] libmachine: STDERR: 
	I0816 10:33:36.701964    5062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2 +20000M
	I0816 10:33:36.710271    5062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:33:36.710289    5062 main.go:141] libmachine: STDERR: 
	I0816 10:33:36.710307    5062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2
	I0816 10:33:36.710315    5062 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:33:36.710324    5062 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:33:36.710350    5062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:e0:2b:5a:01:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2
	I0816 10:33:36.711983    5062 main.go:141] libmachine: STDOUT: 
	I0816 10:33:36.711998    5062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:33:36.712017    5062 client.go:171] duration metric: took 268.706541ms to LocalClient.Create
	I0816 10:33:38.714073    5062 start.go:128] duration metric: took 2.293989083s to createHost
	I0816 10:33:38.714111    5062 start.go:83] releasing machines lock for "kubernetes-upgrade-629000", held for 2.294065917s
	W0816 10:33:38.714127    5062 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:33:38.724717    5062 out.go:177] * Deleting "kubernetes-upgrade-629000" in qemu2 ...
	W0816 10:33:38.732357    5062 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:33:38.732369    5062 start.go:729] Will try again in 5 seconds ...
	I0816 10:33:43.734534    5062 start.go:360] acquireMachinesLock for kubernetes-upgrade-629000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:33:43.735107    5062 start.go:364] duration metric: took 472.917µs to acquireMachinesLock for "kubernetes-upgrade-629000"
	I0816 10:33:43.735181    5062 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:33:43.735446    5062 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:33:43.745033    5062 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:33:43.790604    5062 start.go:159] libmachine.API.Create for "kubernetes-upgrade-629000" (driver="qemu2")
	I0816 10:33:43.790660    5062 client.go:168] LocalClient.Create starting
	I0816 10:33:43.790786    5062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:33:43.790865    5062 main.go:141] libmachine: Decoding PEM data...
	I0816 10:33:43.790883    5062 main.go:141] libmachine: Parsing certificate...
	I0816 10:33:43.790940    5062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:33:43.790987    5062 main.go:141] libmachine: Decoding PEM data...
	I0816 10:33:43.791001    5062 main.go:141] libmachine: Parsing certificate...
	I0816 10:33:43.791534    5062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:33:43.932837    5062 main.go:141] libmachine: Creating SSH key...
	I0816 10:33:43.998930    5062 main.go:141] libmachine: Creating Disk image...
	I0816 10:33:43.998935    5062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:33:43.999125    5062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2
	I0816 10:33:44.008609    5062 main.go:141] libmachine: STDOUT: 
	I0816 10:33:44.008628    5062 main.go:141] libmachine: STDERR: 
	I0816 10:33:44.008698    5062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2 +20000M
	I0816 10:33:44.016901    5062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:33:44.016920    5062 main.go:141] libmachine: STDERR: 
	I0816 10:33:44.016934    5062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2
	I0816 10:33:44.016938    5062 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:33:44.016948    5062 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:33:44.016972    5062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:6d:ed:16:39:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2
	I0816 10:33:44.018642    5062 main.go:141] libmachine: STDOUT: 
	I0816 10:33:44.018656    5062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:33:44.018669    5062 client.go:171] duration metric: took 228.008ms to LocalClient.Create
	I0816 10:33:46.020843    5062 start.go:128] duration metric: took 2.28539475s to createHost
	I0816 10:33:46.020949    5062 start.go:83] releasing machines lock for "kubernetes-upgrade-629000", held for 2.285866667s
	W0816 10:33:46.021268    5062 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-629000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-629000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:33:46.034083    5062 out.go:201] 
	W0816 10:33:46.037165    5062 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:33:46.037189    5062 out.go:270] * 
	* 
	W0816 10:33:46.039754    5062 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:33:46.052827    5062 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-629000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-629000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-629000: (3.352482083s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-629000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-629000 status --format={{.Host}}: exit status 7 (30.17575ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-629000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-629000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.168166834s)

-- stdout --
	* [kubernetes-upgrade-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-629000" primary control-plane node in "kubernetes-upgrade-629000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-629000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-629000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:33:49.480084    5099 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:33:49.480222    5099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:33:49.480225    5099 out.go:358] Setting ErrFile to fd 2...
	I0816 10:33:49.480227    5099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:33:49.480369    5099 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:33:49.481474    5099 out.go:352] Setting JSON to false
	I0816 10:33:49.497892    5099 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3792,"bootTime":1723825837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:33:49.497967    5099 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:33:49.502800    5099 out.go:177] * [kubernetes-upgrade-629000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:33:49.510800    5099 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:33:49.510850    5099 notify.go:220] Checking for updates...
	I0816 10:33:49.517783    5099 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:33:49.520729    5099 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:33:49.523806    5099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:33:49.526815    5099 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:33:49.529720    5099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:33:49.533067    5099 config.go:182] Loaded profile config "kubernetes-upgrade-629000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0816 10:33:49.533347    5099 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:33:49.537792    5099 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 10:33:49.544758    5099 start.go:297] selected driver: qemu2
	I0816 10:33:49.544765    5099 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:33:49.544814    5099 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:33:49.547224    5099 cni.go:84] Creating CNI manager for ""
	I0816 10:33:49.547241    5099 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:33:49.547261    5099 start.go:340] cluster config:
	{Name:kubernetes-upgrade-629000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-629000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:33:49.550794    5099 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:33:49.556740    5099 out.go:177] * Starting "kubernetes-upgrade-629000" primary control-plane node in "kubernetes-upgrade-629000" cluster
	I0816 10:33:49.560766    5099 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:33:49.560779    5099 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:33:49.560788    5099 cache.go:56] Caching tarball of preloaded images
	I0816 10:33:49.560837    5099 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:33:49.560842    5099 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:33:49.560900    5099 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/kubernetes-upgrade-629000/config.json ...
	I0816 10:33:49.561382    5099 start.go:360] acquireMachinesLock for kubernetes-upgrade-629000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:33:49.561408    5099 start.go:364] duration metric: took 19.667µs to acquireMachinesLock for "kubernetes-upgrade-629000"
	I0816 10:33:49.561417    5099 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:33:49.561423    5099 fix.go:54] fixHost starting: 
	I0816 10:33:49.561530    5099 fix.go:112] recreateIfNeeded on kubernetes-upgrade-629000: state=Stopped err=<nil>
	W0816 10:33:49.561539    5099 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:33:49.569748    5099 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-629000" ...
	I0816 10:33:49.573599    5099 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:33:49.573636    5099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:6d:ed:16:39:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2
	I0816 10:33:49.575578    5099 main.go:141] libmachine: STDOUT: 
	I0816 10:33:49.575597    5099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:33:49.575624    5099 fix.go:56] duration metric: took 14.20275ms for fixHost
	I0816 10:33:49.575627    5099 start.go:83] releasing machines lock for "kubernetes-upgrade-629000", held for 14.216042ms
	W0816 10:33:49.575639    5099 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:33:49.575666    5099 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:33:49.575670    5099 start.go:729] Will try again in 5 seconds ...
	I0816 10:33:54.577624    5099 start.go:360] acquireMachinesLock for kubernetes-upgrade-629000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:33:54.577717    5099 start.go:364] duration metric: took 79.916µs to acquireMachinesLock for "kubernetes-upgrade-629000"
	I0816 10:33:54.577733    5099 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:33:54.577737    5099 fix.go:54] fixHost starting: 
	I0816 10:33:54.577874    5099 fix.go:112] recreateIfNeeded on kubernetes-upgrade-629000: state=Stopped err=<nil>
	W0816 10:33:54.577884    5099 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:33:54.582084    5099 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-629000" ...
	I0816 10:33:54.586058    5099 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:33:54.586098    5099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:6d:ed:16:39:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubernetes-upgrade-629000/disk.qcow2
	I0816 10:33:54.588285    5099 main.go:141] libmachine: STDOUT: 
	I0816 10:33:54.588310    5099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:33:54.588330    5099 fix.go:56] duration metric: took 10.593125ms for fixHost
	I0816 10:33:54.588334    5099 start.go:83] releasing machines lock for "kubernetes-upgrade-629000", held for 10.61225ms
	W0816 10:33:54.588385    5099 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-629000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-629000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:33:54.597020    5099 out.go:201] 
	W0816 10:33:54.601114    5099 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:33:54.601122    5099 out.go:270] * 
	* 
	W0816 10:33:54.601619    5099 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:33:54.612026    5099 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-629000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-629000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-629000 version --output=json: exit status 1 (30.549375ms)

** stderr ** 
	error: context "kubernetes-upgrade-629000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
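Note: this kubectl failure is purely downstream of the start failures above. Because the VM never booted, minikube start never wrote a kubernetes-upgrade-629000 context into the kubeconfig, so any `kubectl --context kubernetes-upgrade-629000 ...` invocation fails immediately. A minimal sketch of a pre-check (a hypothetical helper, not part of this test suite) that shells out to `kubectl config get-contexts -o name` to confirm the context exists before querying the cluster:

// contextcheck.go - hypothetical helper, not part of the test suite:
// lists kubeconfig contexts via `kubectl config get-contexts -o name`
// and reports whether the profile's context was ever created.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if sc.Text() == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := contextExists("kubernetes-upgrade-629000")
	fmt.Println(ok, err)
}

Run against the state captured above, contextExists would report false, matching the `context "kubernetes-upgrade-629000" does not exist` error.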
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-16 10:33:54.651297 -0700 PDT m=+2783.899796376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-629000 -n kubernetes-upgrade-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-629000 -n kubernetes-upgrade-629000: exit status 7 (33.223167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-629000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-629000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-629000
--- FAIL: TestKubernetesUpgrade (18.46s)
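Note: every qemu2 start attempt in this test dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives the network file descriptor it is launched with (-netdev socket,id=net0,fd=3), and minikube gives up after one delete-and-retry cycle. A minimal Go sketch (not minikube's actual code; the socket path and retry cadence are taken from the log above) that reproduces the probe, useful for checking the daemon before re-running the suite:

// probe_socket_vmnet.go - hedged sketch, not minikube code: dials the
// unix socket that socket_vmnet_client needs and retries once after 5s,
// mirroring the "Will try again in 5 seconds" flow in the log above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socketPath = "/var/run/socket_vmnet" // path taken from the log above
	for attempt := 1; ; attempt++ {
		conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("socket_vmnet is accepting connections")
			return
		}
		fmt.Fprintf(os.Stderr, "attempt %d: %v\n", attempt, err)
		if attempt == 2 { // minikube likewise gives up after one retry
			os.Exit(1)
		}
		time.Sleep(5 * time.Second)
	}
}

If the dial fails with "connection refused", the fix is host-side: the socket_vmnet daemon (typically run as a root launchd service) needs to be started. Nothing in the Kubernetes upgrade path itself is exercised, and the other qemu2-based failures in this report show the same signature.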

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.34s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19461
- KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3260760294/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.34s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.45s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19461
- KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current291219857/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.45s)
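Note: both TestHyperkitDriverSkipUpgrade subtests fail for the same environmental reason: HyperKit is an Intel-only macOS hypervisor, so on this darwin/arm64 agent minikube rejects the driver before any upgrade logic runs, and exit status 56 (DRV_UNSUPPORTED_OS) is expected. An illustrative sketch of such a platform gate (assumptions only; this is not minikube's actual implementation):

// drivergate.go - illustrative platform gate, not minikube's actual code:
// HyperKit binaries exist only for darwin/amd64, so a darwin/arm64 agent
// is rejected up front, which is what exit status 56 reflects above.
package main

import (
	"fmt"
	"os"
	"runtime"
)

func main() {
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		fmt.Printf("X Exiting due to DRV_UNSUPPORTED_OS: the driver 'hyperkit' is not supported on %s/%s\n", runtime.GOOS, runtime.GOARCH)
		os.Exit(56)
	}
	fmt.Println("hyperkit is a candidate driver on this host")
}

On arm64 agents these subtests can only go green by being skipped or rerouted to a supported driver; the failure does not indicate a regression in the driver-upgrade logic under test.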

TestStoppedBinaryUpgrade/Upgrade (574.09s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2274508634 start -p stopped-upgrade-403000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2274508634 start -p stopped-upgrade-403000 --memory=2200 --vm-driver=qemu2 : (40.073773625s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2274508634 -p stopped-upgrade-403000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2274508634 -p stopped-upgrade-403000 stop: (12.105414375s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-403000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0816 10:36:19.018654    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:36:28.976953    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:38:25.872635    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-403000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.821181083s)

-- stdout --
	* [stopped-upgrade-403000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-403000" primary control-plane node in "stopped-upgrade-403000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-403000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0816 10:34:47.945763    5136 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:34:47.945908    5136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:34:47.945912    5136 out.go:358] Setting ErrFile to fd 2...
	I0816 10:34:47.945915    5136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:34:47.946045    5136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:34:47.947176    5136 out.go:352] Setting JSON to false
	I0816 10:34:47.964781    5136 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3850,"bootTime":1723825837,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:34:47.964853    5136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:34:47.969694    5136 out.go:177] * [stopped-upgrade-403000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:34:47.977619    5136 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:34:47.977656    5136 notify.go:220] Checking for updates...
	I0816 10:34:47.984627    5136 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:34:47.987576    5136 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:34:47.990624    5136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:34:47.993639    5136 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:34:47.996762    5136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:34:47.999877    5136 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:34:48.003631    5136 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 10:34:48.006584    5136 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:34:48.010602    5136 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 10:34:48.016619    5136 start.go:297] selected driver: qemu2
	I0816 10:34:48.016626    5136 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 10:34:48.016694    5136 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:34:48.019222    5136 cni.go:84] Creating CNI manager for ""
	I0816 10:34:48.019242    5136 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:34:48.019264    5136 start.go:340] cluster config:
	{Name:stopped-upgrade-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 10:34:48.019338    5136 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:34:48.026632    5136 out.go:177] * Starting "stopped-upgrade-403000" primary control-plane node in "stopped-upgrade-403000" cluster
	I0816 10:34:48.030631    5136 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0816 10:34:48.030647    5136 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0816 10:34:48.030657    5136 cache.go:56] Caching tarball of preloaded images
	I0816 10:34:48.030730    5136 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:34:48.030736    5136 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0816 10:34:48.030798    5136 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/config.json ...
	I0816 10:34:48.031236    5136 start.go:360] acquireMachinesLock for stopped-upgrade-403000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:34:48.031269    5136 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "stopped-upgrade-403000"
	I0816 10:34:48.031278    5136 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:34:48.031284    5136 fix.go:54] fixHost starting: 
	I0816 10:34:48.031388    5136 fix.go:112] recreateIfNeeded on stopped-upgrade-403000: state=Stopped err=<nil>
	W0816 10:34:48.031396    5136 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:34:48.035719    5136 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-403000" ...
	I0816 10:34:48.043619    5136 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:34:48.043687    5136 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50463-:22,hostfwd=tcp::50464-:2376,hostname=stopped-upgrade-403000 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/disk.qcow2
	I0816 10:34:48.090633    5136 main.go:141] libmachine: STDOUT: 
	I0816 10:34:48.090668    5136 main.go:141] libmachine: STDERR: 
	I0816 10:34:48.090673    5136 main.go:141] libmachine: Waiting for VM to start (ssh -p 50463 docker@127.0.0.1)...
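
The qemu-system-aarch64 invocation above is how the qemu2 driver boots the machine on macOS: hvf (Hypervisor.framework) acceleration, the EDK2 firmware mounted read-only as pflash, and a user-mode NIC whose hostfwd rules map host ports 50463 and 50464 to guest SSH (22) and Docker TLS (2376). A minimal Go sketch of assembling such a command; the machine directory is a placeholder, not the driver's actual code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		dir := "/tmp/demo-machine" // hypothetical machine directory
		args := []string{
			"-M", "virt,highmem=off",
			"-cpu", "host",
			"-accel", "hvf", // Hypervisor.framework acceleration on Apple Silicon
			"-m", "2200", "-smp", "2",
			"-boot", "d", "-cdrom", dir + "/boot2docker.iso",
			"-qmp", "unix:" + dir + "/monitor,server,nowait",
			"-pidfile", dir + "/qemu.pid",
			// user-mode networking: host 50463 -> guest 22 (SSH),
			// host 50464 -> guest 2376 (Docker TLS)
			"-nic", "user,model=virtio,hostfwd=tcp::50463-:22,hostfwd=tcp::50464-:2376",
			"-daemonize", dir + "/disk.qcow2",
		}
		cmd := exec.Command("qemu-system-aarch64", args...)
		fmt.Println(cmd.String()) // log the full command line, as libmachine does
		if err := cmd.Run(); err != nil {
			fmt.Println("qemu failed:", err)
		}
	}
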
	I0816 10:35:08.135616    5136 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/config.json ...
	I0816 10:35:08.136486    5136 machine.go:93] provisionDockerMachine start ...
	I0816 10:35:08.136585    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:08.136940    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:08.136948    5136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 10:35:08.227595    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0816 10:35:08.227629    5136 buildroot.go:166] provisioning hostname "stopped-upgrade-403000"
	I0816 10:35:08.227744    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:08.227953    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:08.227963    5136 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-403000 && echo "stopped-upgrade-403000" | sudo tee /etc/hostname
	I0816 10:35:08.309692    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-403000
	
	I0816 10:35:08.309777    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:08.309957    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:08.309972    5136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-403000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-403000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-403000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 10:35:08.386051    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
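
The hosts-file edit above is deliberately idempotent: nothing happens if /etc/hosts already names the host, an existing 127.0.1.1 line is rewritten in place, and otherwise a new entry is appended. A rough local Go equivalent of that shell logic (the real thing runs over SSH with sudo tee):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostname mimics the shell snippet: add "127.0.1.1 <name>" to a
	// hosts file unless some line already names the host.
	func ensureHostname(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		text := string(data)
		if strings.Contains(text, " "+name) || strings.Contains(text, "\t"+name) {
			return nil // an entry for this hostname already exists
		}
		lines := strings.Split(text, "\n")
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name // rewrite the existing alias line
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
			}
		}
		lines = append(lines, "127.0.1.1 "+name) // or append a fresh one
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		fmt.Println(ensureHostname("/tmp/hosts-demo", "stopped-upgrade-403000"))
	}
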
	I0816 10:35:08.386065    5136 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19461-1189/.minikube CaCertPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19461-1189/.minikube}
	I0816 10:35:08.386081    5136 buildroot.go:174] setting up certificates
	I0816 10:35:08.386096    5136 provision.go:84] configureAuth start
	I0816 10:35:08.386103    5136 provision.go:143] copyHostCerts
	I0816 10:35:08.386189    5136 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.pem, removing ...
	I0816 10:35:08.386195    5136 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.pem
	I0816 10:35:08.386305    5136 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.pem (1082 bytes)
	I0816 10:35:08.386492    5136 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1189/.minikube/cert.pem, removing ...
	I0816 10:35:08.386496    5136 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1189/.minikube/cert.pem
	I0816 10:35:08.386556    5136 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19461-1189/.minikube/cert.pem (1123 bytes)
	I0816 10:35:08.386673    5136 exec_runner.go:144] found /Users/jenkins/minikube-integration/19461-1189/.minikube/key.pem, removing ...
	I0816 10:35:08.386676    5136 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19461-1189/.minikube/key.pem
	I0816 10:35:08.386727    5136 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19461-1189/.minikube/key.pem (1679 bytes)
	I0816 10:35:08.386823    5136 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-403000 san=[127.0.0.1 localhost minikube stopped-upgrade-403000]
	I0816 10:35:08.473164    5136 provision.go:177] copyRemoteCerts
	I0816 10:35:08.473191    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 10:35:08.473200    5136 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/id_rsa Username:docker}
	I0816 10:35:08.510762    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0816 10:35:08.517573    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0816 10:35:08.524611    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 10:35:08.531934    5136 provision.go:87] duration metric: took 145.835417ms to configureAuth
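
configureAuth re-signs the Docker server certificate against the minikube CA with the SANs listed above (127.0.0.1, localhost, minikube, stopped-upgrade-403000), then pushes CA, cert, and key to /etc/docker. A self-contained crypto/x509 sketch of that signing step, with a throwaway CA standing in for minikube's; error handling is abbreviated:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "demoCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-403000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			// SANs taken from the provision.go log line above
			DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-403000"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
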
	I0816 10:35:08.531944    5136 buildroot.go:189] setting minikube options for container-runtime
	I0816 10:35:08.532079    5136 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:35:08.532115    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:08.532203    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:08.532208    5136 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0816 10:35:08.602067    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0816 10:35:08.602076    5136 buildroot.go:70] root file system type: tmpfs
	I0816 10:35:08.602128    5136 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0816 10:35:08.602181    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:08.602297    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:08.602333    5136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0816 10:35:08.674591    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0816 10:35:08.674646    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:08.674765    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:08.674773    5136 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0816 10:35:09.058118    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0816 10:35:09.058132    5136 machine.go:96] duration metric: took 921.658125ms to provisionDockerMachine
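
The `diff -u ... || { mv ...; systemctl ... }` one-liner above is a write-if-changed guard: the freshly rendered docker.service only replaces the installed unit, and docker is only reloaded and restarted, when the two differ (here the unit didn't exist yet, so it was installed and enabled). The same pattern in Go, assuming local file access rather than SSH:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// installIfChanged mirrors the diff-or-move pattern above: replace the unit
	// and bounce the service only when the newly rendered file differs.
	func installIfChanged(current, rendered string) error {
		old, _ := os.ReadFile(current) // a missing file yields nil content, forcing an install
		next, err := os.ReadFile(rendered)
		if err != nil {
			return err
		}
		if bytes.Equal(old, next) {
			return os.Remove(rendered) // nothing changed; drop the staging file
		}
		if err := os.Rename(rendered, current); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v (%s)", args, err, out)
			}
		}
		return nil
	}

	func main() {
		fmt.Println(installIfChanged(
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new"))
	}
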
	I0816 10:35:09.058139    5136 start.go:293] postStartSetup for "stopped-upgrade-403000" (driver="qemu2")
	I0816 10:35:09.058146    5136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 10:35:09.058213    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 10:35:09.058222    5136 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/id_rsa Username:docker}
	I0816 10:35:09.095798    5136 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 10:35:09.097185    5136 info.go:137] Remote host: Buildroot 2021.02.12
	I0816 10:35:09.097193    5136 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1189/.minikube/addons for local assets ...
	I0816 10:35:09.097275    5136 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19461-1189/.minikube/files for local assets ...
	I0816 10:35:09.097395    5136 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19461-1189/.minikube/files/etc/ssl/certs/20542.pem -> 20542.pem in /etc/ssl/certs
	I0816 10:35:09.097522    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 10:35:09.100423    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/files/etc/ssl/certs/20542.pem --> /etc/ssl/certs/20542.pem (1708 bytes)
	I0816 10:35:09.107857    5136 start.go:296] duration metric: took 49.713583ms for postStartSetup
	I0816 10:35:09.107875    5136 fix.go:56] duration metric: took 21.077043583s for fixHost
	I0816 10:35:09.107908    5136 main.go:141] libmachine: Using SSH client type: native
	I0816 10:35:09.108011    5136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1004845a0] 0x100486e00 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0816 10:35:09.108016    5136 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0816 10:35:09.178400    5136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723829709.337763504
	
	I0816 10:35:09.178407    5136 fix.go:216] guest clock: 1723829709.337763504
	I0816 10:35:09.178411    5136 fix.go:229] Guest: 2024-08-16 10:35:09.337763504 -0700 PDT Remote: 2024-08-16 10:35:09.107877 -0700 PDT m=+21.187146167 (delta=229.886504ms)
	I0816 10:35:09.178422    5136 fix.go:200] guest clock delta is within tolerance: 229.886504ms
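
The guest-clock check parses `date +%s.%N` from the VM and compares it against the host clock, resyncing only when the delta exceeds a tolerance (229ms passed here). A small Go sketch of that comparison; the one-second tolerance is an assumption, since the actual threshold isn't printed in this log:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		guestOut := "1723829709.337763504" // output of `date +%s.%N` in the guest
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := guest.Sub(time.Now())
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed; the real value isn't shown in the log
		if delta > tolerance {
			fmt.Printf("guest clock skewed by %v, would resync\n", delta)
		} else {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		}
	}
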
	I0816 10:35:09.178425    5136 start.go:83] releasing machines lock for "stopped-upgrade-403000", held for 21.147603417s
	I0816 10:35:09.178482    5136 ssh_runner.go:195] Run: cat /version.json
	I0816 10:35:09.178491    5136 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/id_rsa Username:docker}
	I0816 10:35:09.178495    5136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 10:35:09.178511    5136 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/id_rsa Username:docker}
	W0816 10:35:09.179048    5136 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50463: connect: connection refused
	I0816 10:35:09.179073    5136 retry.go:31] will retry after 239.326818ms: dial tcp [::1]:50463: connect: connection refused
	W0816 10:35:09.214211    5136 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0816 10:35:09.214257    5136 ssh_runner.go:195] Run: systemctl --version
	I0816 10:35:09.216074    5136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0816 10:35:09.217772    5136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0816 10:35:09.217797    5136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0816 10:35:09.220700    5136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0816 10:35:09.225628    5136 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
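
The two `find ... -exec sed` runs above rewrite any bridge/podman CNI config so its subnet and gateway land inside the cluster pod CIDR (10.244.0.0/16), which is how 87-podman-bridge.conflist ends up "configured". A Go rendering of the same regex rewrite, reading the conflist the log names:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		subnetRe := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
		gatewayRe := regexp.MustCompile(`"gateway":\s*"[^"]*"`)

		conf, err := os.ReadFile("/etc/cni/net.d/87-podman-bridge.conflist")
		if err != nil {
			fmt.Println(err)
			return
		}
		// force every subnet/gateway value to the cluster pod CIDR
		conf = subnetRe.ReplaceAll(conf, []byte(`"subnet": "10.244.0.0/16"`))
		conf = gatewayRe.ReplaceAll(conf, []byte(`"gateway": "10.244.0.1"`))
		fmt.Print(string(conf)) // the real flow writes this back in place via sudo sed -i
	}
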
	I0816 10:35:09.225642    5136 start.go:495] detecting cgroup driver to use...
	I0816 10:35:09.225711    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:35:09.232663    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0816 10:35:09.236039    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 10:35:09.239552    5136 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 10:35:09.239584    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 10:35:09.242597    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:35:09.245405    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 10:35:09.248646    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 10:35:09.252123    5136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 10:35:09.255615    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 10:35:09.258509    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 10:35:09.261279    5136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0816 10:35:09.264427    5136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 10:35:09.267328    5136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 10:35:09.269980    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:35:09.333896    5136 ssh_runner.go:195] Run: sudo systemctl restart containerd
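
This block aligns containerd with the "cgroupfs" driver: crictl is pointed at containerd's socket, config.toml gets SystemdCgroup forced to false and the legacy runtime names mapped to runc.v2, and containerd is restarted. A sketch of the central toggle, using the same substitution as the sed call above:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		// flip SystemdCgroup to false so containerd drives cgroups via
		// cgroupfs, matching the kubelet configuration generated later
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		cfg, err := os.ReadFile("/etc/containerd/config.toml")
		if err != nil {
			fmt.Println(err)
			return
		}
		cfg = re.ReplaceAll(cfg, []byte("${1}SystemdCgroup = false"))
		// the real flow then runs `systemctl daemon-reload` and restarts containerd
		fmt.Print(string(cfg))
	}
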
	I0816 10:35:09.342336    5136 start.go:495] detecting cgroup driver to use...
	I0816 10:35:09.342407    5136 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0816 10:35:09.350021    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:35:09.355402    5136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0816 10:35:09.364554    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0816 10:35:09.369534    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:35:09.374522    5136 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 10:35:09.426844    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 10:35:09.432061    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 10:35:09.437582    5136 ssh_runner.go:195] Run: which cri-dockerd
	I0816 10:35:09.439189    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0816 10:35:09.441768    5136 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0816 10:35:09.446553    5136 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0816 10:35:09.514887    5136 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0816 10:35:09.700833    5136 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0816 10:35:09.700914    5136 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0816 10:35:09.707176    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:35:09.783752    5136 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:35:10.934982    5136 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.151235792s)
	I0816 10:35:10.935052    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0816 10:35:10.939554    5136 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0816 10:35:10.945892    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:35:10.951005    5136 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0816 10:35:11.030883    5136 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0816 10:35:11.105527    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:35:11.186520    5136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0816 10:35:11.192444    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0816 10:35:11.197419    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:35:11.273787    5136 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
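
After settling on docker, minikube stops containerd and crio, points crictl at /var/run/cri-dockerd.sock, and ships a 130-byte /etc/docker/daemon.json to pin the cgroupfs driver. The payload itself isn't echoed in this log; the standard Docker knob for this is exec-opts, so the following is a plausible guess rather than the exact file:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// hypothetical daemon.json content; only the cgroup-driver setting
		// is grounded in the "configuring docker" log line above
		cfg := map[string]interface{}{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		out, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(out))
	}
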
	I0816 10:35:11.311150    5136 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0816 10:35:11.311224    5136 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0816 10:35:11.313979    5136 start.go:563] Will wait 60s for crictl version
	I0816 10:35:11.314037    5136 ssh_runner.go:195] Run: which crictl
	I0816 10:35:11.315471    5136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 10:35:11.329863    5136 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0816 10:35:11.329929    5136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:35:11.345689    5136 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0816 10:35:11.367287    5136 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0816 10:35:11.367415    5136 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0816 10:35:11.368748    5136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:35:11.372786    5136 kubeadm.go:883] updating cluster {Name:stopped-upgrade-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0816 10:35:11.372834    5136 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0816 10:35:11.372870    5136 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:35:11.383598    5136 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:35:11.383606    5136 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0816 10:35:11.383646    5136 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:35:11.386963    5136 ssh_runner.go:195] Run: which lz4
	I0816 10:35:11.388295    5136 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 10:35:11.389679    5136 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0816 10:35:11.389690    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0816 10:35:12.365896    5136 docker.go:649] duration metric: took 977.648292ms to copy over tarball
	I0816 10:35:12.365953    5136 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 10:35:13.534088    5136 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.168139s)
	I0816 10:35:13.534103    5136 ssh_runner.go:146] rm: /preloaded.tar.lz4
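
The preload path is copy-if-missing: the `stat -c "%s %y"` probe failed (no /preloaded.tar.lz4 in the guest), so the 359 MB tarball was scp'd over and untarred into /var with lz4, then removed. A Go sketch of that existence check; the real probe also compares the mtime, which is omitted here:

	package main

	import (
		"fmt"
		"os"
	)

	// needsCopy mirrors the stat-based check above: only transfer the preload
	// tarball when the target is missing or differs in size.
	func needsCopy(remotePath string, wantSize int64) bool {
		fi, err := os.Stat(remotePath) // the real check runs `stat -c "%s %y"` over SSH
		if err != nil {
			return true // "No such file or directory" -> copy
		}
		return fi.Size() != wantSize
	}

	func main() {
		fmt.Println(needsCopy("/preloaded.tar.lz4", 359514331))
	}
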
	I0816 10:35:13.549376    5136 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0816 10:35:13.552531    5136 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0816 10:35:13.557254    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:35:13.633610    5136 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0816 10:35:15.513867    5136 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.8802815s)
	I0816 10:35:15.513955    5136 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0816 10:35:15.527255    5136 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0816 10:35:15.527264    5136 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0816 10:35:15.527269    5136 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0816 10:35:15.531146    5136 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:35:15.533516    5136 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:35:15.534462    5136 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0816 10:35:15.534767    5136 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:35:15.537042    5136 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:35:15.538816    5136 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0816 10:35:15.538976    5136 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0816 10:35:15.539371    5136 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:35:15.540797    5136 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:35:15.541356    5136 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0816 10:35:15.541365    5136 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:35:15.541477    5136 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:35:15.543063    5136 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:35:15.543993    5136 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:35:15.544063    5136 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:35:15.544937    5136 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:35:16.021883    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:35:16.025589    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0816 10:35:16.025868    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:35:16.025888    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0816 10:35:16.034285    5136 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0816 10:35:16.034314    5136 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:35:16.034366    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0816 10:35:16.058080    5136 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0816 10:35:16.058105    5136 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0816 10:35:16.058161    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0816 10:35:16.058186    5136 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0816 10:35:16.058198    5136 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:35:16.058225    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0816 10:35:16.058213    5136 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0816 10:35:16.058250    5136 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0816 10:35:16.058271    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0816 10:35:16.061372    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0816 10:35:16.069779    5136 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0816 10:35:16.069893    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:35:16.077232    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0816 10:35:16.077279    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0816 10:35:16.077377    5136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0816 10:35:16.078493    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:35:16.084017    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0816 10:35:16.084146    5136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0816 10:35:16.093172    5136 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0816 10:35:16.093200    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0816 10:35:16.093254    5136 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0816 10:35:16.093274    5136 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:35:16.093314    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0816 10:35:16.094433    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:35:16.096449    5136 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0816 10:35:16.096465    5136 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:35:16.096494    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0816 10:35:16.098223    5136 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0816 10:35:16.098247    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0816 10:35:16.128641    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0816 10:35:16.128767    5136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0816 10:35:16.130610    5136 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0816 10:35:16.130641    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0816 10:35:16.130648    5136 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:35:16.130784    5136 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0816 10:35:16.142166    5136 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0816 10:35:16.142201    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0816 10:35:16.142221    5136 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0816 10:35:16.142227    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0816 10:35:16.161651    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W0816 10:35:16.163432    5136 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0816 10:35:16.163547    5136 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:35:16.211414    5136 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0816 10:35:16.222338    5136 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0816 10:35:16.222370    5136 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:35:16.222437    5136 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:35:16.256466    5136 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0816 10:35:16.256492    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0816 10:35:16.279592    5136 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 10:35:16.279731    5136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 10:35:16.354784    5136 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0816 10:35:16.354827    5136 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0816 10:35:16.354853    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0816 10:35:16.423920    5136 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 10:35:16.423938    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0816 10:35:16.749534    5136 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 10:35:16.749560    5136 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0816 10:35:16.749567    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0816 10:35:16.903236    5136 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0816 10:35:16.903274    5136 cache_images.go:92] duration metric: took 1.376027458s to LoadCachedImages
	W0816 10:35:16.903321    5136 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
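
Each cached image is streamed into the daemon with `cat <tarball> | docker load`; pause, coredns, storage-provisioner, and etcd made it, while kube-scheduler's tarball was missing from the host cache, hence the warning above. A Go equivalent of one load call:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// loadImage mirrors the `cat <tarball> | docker load` calls above, feeding
	// a cached image archive to the daemon on stdin.
	func loadImage(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		cmd := exec.Command("docker", "load")
		cmd.Stdin = f
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker load: %v (%s)", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(loadImage("/var/lib/minikube/images/pause_3.7"))
	}
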
	I0816 10:35:16.903335    5136 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0816 10:35:16.903386    5136 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-403000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 10:35:16.903451    5136 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0816 10:35:16.917404    5136 cni.go:84] Creating CNI manager for ""
	I0816 10:35:16.917416    5136 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:35:16.917421    5136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 10:35:16.917432    5136 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-403000 NodeName:stopped-upgrade-403000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 10:35:16.917504    5136 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-403000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 10:35:16.917560    5136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0816 10:35:16.920653    5136 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 10:35:16.920678    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 10:35:16.923913    5136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0816 10:35:16.928941    5136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 10:35:16.933971    5136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
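
The rendered kubeadm config above is staged on the VM as /var/tmp/minikube/kubeadm.yaml.new and consumed by kubeadm later in the start flow; the exact init/phase invocations fall outside this excerpt. A sketch only, assuming a plain `kubeadm init --config` would accept the staged file:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cfgPath := "/var/tmp/minikube/kubeadm.yaml.new"
		if _, err := os.Stat(cfgPath); err != nil {
			fmt.Println(err)
			return
		}
		// hypothetical invocation; minikube drives kubeadm with additional
		// flags and phases not shown in this log
		cmd := exec.Command("kubeadm", "init", "--config", cfgPath)
		fmt.Println("would run:", cmd.String())
	}
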
	I0816 10:35:16.939355    5136 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0816 10:35:16.940666    5136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 10:35:16.944531    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:35:17.006607    5136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:35:17.011858    5136 certs.go:68] Setting up /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000 for IP: 10.0.2.15
	I0816 10:35:17.011864    5136 certs.go:194] generating shared ca certs ...
	I0816 10:35:17.011872    5136 certs.go:226] acquiring lock for ca certs: {Name:mkd0f48b500cbb75fb3e9a7c625fdb17e399313f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:35:17.012028    5136 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.key
	I0816 10:35:17.012079    5136 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/proxy-client-ca.key
	I0816 10:35:17.012084    5136 certs.go:256] generating profile certs ...
	I0816 10:35:17.012157    5136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/client.key
	I0816 10:35:17.012174    5136 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.key.feb6d76f
	I0816 10:35:17.012185    5136 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.crt.feb6d76f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0816 10:35:17.135094    5136 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.crt.feb6d76f ...
	I0816 10:35:17.135106    5136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.crt.feb6d76f: {Name:mk27c02f3c1b53070f9e389840434de4c108251c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:35:17.136522    5136 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.key.feb6d76f ...
	I0816 10:35:17.136536    5136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.key.feb6d76f: {Name:mkba004d73043a9e35c85af6ee5e0accff6107ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:35:17.136698    5136 certs.go:381] copying /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.crt.feb6d76f -> /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.crt
	I0816 10:35:17.136851    5136 certs.go:385] copying /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.key.feb6d76f -> /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.key
	I0816 10:35:17.137010    5136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/proxy-client.key
	I0816 10:35:17.137153    5136 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/2054.pem (1338 bytes)
	W0816 10:35:17.137184    5136 certs.go:480] ignoring /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/2054_empty.pem, impossibly tiny 0 bytes
	I0816 10:35:17.137191    5136 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca-key.pem (1679 bytes)
	I0816 10:35:17.137211    5136 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem (1082 bytes)
	I0816 10:35:17.137235    5136 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem (1123 bytes)
	I0816 10:35:17.137255    5136 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/key.pem (1679 bytes)
	I0816 10:35:17.137297    5136 certs.go:484] found cert: /Users/jenkins/minikube-integration/19461-1189/.minikube/files/etc/ssl/certs/20542.pem (1708 bytes)
	I0816 10:35:17.137672    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 10:35:17.144521    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 10:35:17.151488    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 10:35:17.158494    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0816 10:35:17.165109    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 10:35:17.172165    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 10:35:17.179630    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 10:35:17.186847    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 10:35:17.193579    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/files/etc/ssl/certs/20542.pem --> /usr/share/ca-certificates/20542.pem (1708 bytes)
	I0816 10:35:17.200461    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 10:35:17.207664    5136 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/2054.pem --> /usr/share/ca-certificates/2054.pem (1338 bytes)
	I0816 10:35:17.214452    5136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 10:35:17.219361    5136 ssh_runner.go:195] Run: openssl version
	I0816 10:35:17.221234    5136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 10:35:17.224536    5136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:35:17.225949    5136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 16:48 /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:35:17.225971    5136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 10:35:17.227670    5136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 10:35:17.230813    5136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2054.pem && ln -fs /usr/share/ca-certificates/2054.pem /etc/ssl/certs/2054.pem"
	I0816 10:35:17.233642    5136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2054.pem
	I0816 10:35:17.235037    5136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 16:55 /usr/share/ca-certificates/2054.pem
	I0816 10:35:17.235056    5136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2054.pem
	I0816 10:35:17.236926    5136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2054.pem /etc/ssl/certs/51391683.0"
	I0816 10:35:17.240386    5136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20542.pem && ln -fs /usr/share/ca-certificates/20542.pem /etc/ssl/certs/20542.pem"
	I0816 10:35:17.243808    5136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20542.pem
	I0816 10:35:17.245229    5136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 16:55 /usr/share/ca-certificates/20542.pem
	I0816 10:35:17.245253    5136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20542.pem
	I0816 10:35:17.246952    5136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20542.pem /etc/ssl/certs/3ec20f2e.0"
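
The openssl x509 -hash calls above compute each certificate's subject hash, and the ln -fs commands create the <hash>.0 links that OpenSSL's hashed-directory lookup expects under /etc/ssl/certs. A small sketch of that hash-and-link step, assuming openssl is on PATH and the process can write to the target directory:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash computes the OpenSSL subject hash of a PEM certificate and
    // creates the <hash>.0 symlink that the hashed-directory layout expects.
    func linkByHash(certPath, certDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certDir, hash+".0")
    	_ = os.Remove(link) // replace a stale link, like ln -fs
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("linked")
    }
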
	I0816 10:35:17.249809    5136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 10:35:17.251181    5136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 10:35:17.253166    5136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 10:35:17.255056    5136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 10:35:17.257778    5136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 10:35:17.259518    5136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 10:35:17.261482    5136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
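
Each -checkend 86400 invocation above asks whether a certificate expires within the next 24 hours (86400 seconds). The same check can be done without shelling out, using Go's crypto/x509 to compare NotAfter against now+24h; the path below is taken from the log, the rest is a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires in
    // less than d, the same question "openssl x509 -checkend 86400" answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
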
	I0816 10:35:17.263281    5136 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-403000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0816 10:35:17.263354    5136 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:35:17.273574    5136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 10:35:17.276819    5136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 10:35:17.276826    5136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 10:35:17.276852    5136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 10:35:17.280212    5136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 10:35:17.280510    5136 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-403000" does not appear in /Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:35:17.280611    5136 kubeconfig.go:62] /Users/jenkins/minikube-integration/19461-1189/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-403000" cluster setting kubeconfig missing "stopped-upgrade-403000" context setting]
	I0816 10:35:17.280797    5136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/kubeconfig: {Name:mk2e4f2b039616ddb85ed20d74e703a928518229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:35:17.281211    5136 kapi.go:59] client config for stopped-upgrade-403000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101a3d610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:35:17.281539    5136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 10:35:17.284428    5136 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-403000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
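
The drift check above leans on diff's exit status: 0 means the rendered kubeadm.yaml matches the new one, 1 means they differ and the cluster gets reconfigured from the new file. A sketch of that status-code mapping (the two paths mirror the log; the error handling is illustrative):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // configDrifted runs diff -u old new and maps the exit status:
    // 0 means identical, 1 means the files differ (drift), anything else
    // is a real error (e.g. a missing file).
    func configDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil
    	}
    	return false, "", err
    }

    func main() {
    	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		log.Fatal(err)
    	}
    	if drifted {
    		fmt.Println("kubeadm config drift detected:\n" + diff)
    	}
    }
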
	I0816 10:35:17.284436    5136 kubeadm.go:1160] stopping kube-system containers ...
	I0816 10:35:17.284475    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0816 10:35:17.294968    5136 docker.go:483] Stopping containers: [9533d81142ad 5db973a16a19 c96cfddd42cc 44ae055ab8e7 197bec61c229 b623ce8dc29a a0e70d78570e dee52b6f306c]
	I0816 10:35:17.295033    5136 ssh_runner.go:195] Run: docker stop 9533d81142ad 5db973a16a19 c96cfddd42cc 44ae055ab8e7 197bec61c229 b623ce8dc29a a0e70d78570e dee52b6f306c
	I0816 10:35:17.305287    5136 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0816 10:35:17.311096    5136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:35:17.313787    5136 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:35:17.313793    5136 kubeadm.go:157] found existing configuration files:
	
	I0816 10:35:17.313812    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf
	I0816 10:35:17.316563    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:35:17.316598    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:35:17.319440    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf
	I0816 10:35:17.322063    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:35:17.322089    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:35:17.324661    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf
	I0816 10:35:17.327560    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:35:17.327583    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:35:17.330102    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf
	I0816 10:35:17.332633    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:35:17.332659    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 10:35:17.335481    5136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
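
The loop above greps each kubeconfig-style file for the expected control-plane endpoint and, when the endpoint is absent (here the files are missing entirely), removes the file so the following kubeadm phases regenerate it; the new kubeadm.yaml is then copied into place. A compact sketch of that grep-then-remove pass, assuming passwordless sudo on the target:

    package main

    import (
    	"log"
    	"os/exec"
    )

    // cleanStale removes each config file that does not mention the expected
    // control-plane endpoint, mirroring the grep-then-rm loop in the log.
    func cleanStale(endpoint string, files []string) {
    	for _, f := range files {
    		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
    			// grep exits non-zero when the endpoint is absent (or the
    			// file is missing), so drop the file and let kubeadm
    			// recreate it in the next phase.
    			if rmErr := exec.Command("sudo", "rm", "-f", f).Run(); rmErr != nil {
    				log.Printf("rm %s: %v", f, rmErr)
    			}
    		}
    	}
    }

    func main() {
    	cleanStale("https://control-plane.minikube.internal:50498", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
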
	I0816 10:35:17.338147    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:35:17.359893    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:35:17.670815    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:35:17.806486    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 10:35:17.834907    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
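
Rather than a full kubeadm init, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed config, with the versioned binaries directory prepended to PATH. A sketch of that sequence, assuming the same filesystem layout as the log:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Paths matching the layout in the log above.
    	binDir := "/var/lib/minikube/binaries/v1.24.1"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"

    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", cfg)
    		// sudo env PATH=<binDir>:$PATH kubeadm <phase...> --config <cfg>
    		cmd := exec.Command("sudo", append([]string{"env", "PATH=" + binDir + ":" + os.Getenv("PATH"), "kubeadm"}, args...)...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			log.Fatalf("kubeadm %v: %v", p, err)
    		}
    	}
    }
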
	I0816 10:35:17.861088    5136 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:35:17.861162    5136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:35:18.363227    5136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:35:18.863233    5136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:35:18.867597    5136 api_server.go:72] duration metric: took 1.006530334s to wait for apiserver process to appear ...
	I0816 10:35:18.867608    5136 api_server.go:88] waiting for apiserver healthz status ...
	I0816 10:35:18.867621    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:23.869667    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:23.869712    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:28.869954    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:28.870010    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:33.870368    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:33.870410    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:38.870881    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:38.870917    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:43.871579    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:43.871643    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:48.872365    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:48.872410    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:53.873511    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:53.873533    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:35:58.875144    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:35:58.875279    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:03.877438    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:03.877485    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:08.879728    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:08.879797    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:13.882049    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:13.882071    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:18.884141    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
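
The repeating healthz probes above use a short per-request client timeout, which is why each failed attempt surfaces as "Client.Timeout exceeded" roughly five seconds apart. A sketch of such a poll loop; the URL matches the log, while the retry interval, overall deadline, and the InsecureSkipVerify shortcut are assumptions:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it answers 200
    // or the deadline passes; each attempt gets its own short client timeout.
    func waitHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver cert is not trusted by the host here; a real
    		// client would pin the cluster CA instead of skipping verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
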
	I0816 10:36:18.884296    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:36:18.895316    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:36:18.895398    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:36:18.905890    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:36:18.905969    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:36:18.916129    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:36:18.916193    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:36:18.930094    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:36:18.930169    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:36:18.940515    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:36:18.940582    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:36:18.951345    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:36:18.951412    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:36:18.961940    5136 logs.go:276] 0 containers: []
	W0816 10:36:18.961952    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:36:18.962015    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:36:18.972086    5136 logs.go:276] 0 containers: []
	W0816 10:36:18.972098    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:36:18.972105    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:36:18.972110    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:36:18.985818    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:36:18.985832    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:36:18.996814    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:36:18.996826    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:36:19.008781    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:36:19.008793    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:36:19.022814    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:36:19.022827    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:36:19.035517    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:36:19.035533    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:36:19.074659    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:36:19.074672    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:36:19.152463    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:36:19.152478    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:36:19.195952    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:36:19.195969    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:36:19.213450    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:36:19.213465    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:36:19.225323    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:36:19.225333    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:36:19.249379    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:36:19.249389    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:36:19.253370    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:36:19.253379    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:36:19.267480    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:36:19.267496    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:36:19.282532    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:36:19.282543    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
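
Each failed probe round triggers the same diagnostic sweep: docker ps -a filtered by the k8s_<component> name prefix, then docker logs --tail 400 for every hit, alongside journalctl for kubelet/docker and dmesg. A sketch of the per-component part of that sweep (the component list is copied from the log; the output formatting is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // matches the k8s_<component> prefix, like the docker ps filters above.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, err)
    			continue
    		}
    		for _, id := range ids {
    			// Tail the last 400 lines of each candidate container.
    			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("== %s [%s] ==\n%s\n", c, id, out)
    		}
    	}
    }
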
	I0816 10:36:21.800419    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:26.802742    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:26.802911    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:36:26.817089    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:36:26.817162    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:36:26.828850    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:36:26.828926    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:36:26.840934    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:36:26.841025    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:36:26.852261    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:36:26.852335    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:36:26.863078    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:36:26.863145    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:36:26.876691    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:36:26.876748    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:36:26.887556    5136 logs.go:276] 0 containers: []
	W0816 10:36:26.887570    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:36:26.887628    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:36:26.897732    5136 logs.go:276] 0 containers: []
	W0816 10:36:26.897744    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:36:26.897751    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:36:26.897756    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:36:26.909758    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:36:26.909769    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:36:26.922380    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:36:26.922391    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:36:26.935572    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:36:26.935586    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:36:26.977768    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:36:26.977782    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:36:26.982111    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:36:26.982119    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:36:26.997092    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:36:26.997102    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:36:27.012039    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:36:27.012049    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:36:27.023795    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:36:27.023808    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:36:27.060743    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:36:27.060754    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:36:27.078509    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:36:27.078521    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:36:27.104713    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:36:27.104729    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:36:27.121425    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:36:27.121437    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:36:27.159609    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:36:27.159620    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:36:27.171553    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:36:27.171564    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:36:29.687405    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:34.689683    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:34.689914    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:36:34.713249    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:36:34.713345    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:36:34.729575    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:36:34.729661    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:36:34.742462    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:36:34.742536    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:36:34.753764    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:36:34.753835    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:36:34.763945    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:36:34.764013    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:36:34.774255    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:36:34.774330    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:36:34.784617    5136 logs.go:276] 0 containers: []
	W0816 10:36:34.784629    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:36:34.784686    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:36:34.795039    5136 logs.go:276] 0 containers: []
	W0816 10:36:34.795051    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:36:34.795058    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:36:34.795064    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:36:34.806342    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:36:34.806355    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:36:34.819454    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:36:34.819471    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:36:34.831486    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:36:34.831499    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:36:34.845838    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:36:34.845851    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:36:34.884682    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:36:34.884696    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:36:34.900468    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:36:34.900483    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:36:34.913931    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:36:34.913945    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:36:34.939441    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:36:34.939450    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:36:34.977989    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:36:34.978002    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:36:34.982225    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:36:34.982231    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:36:35.001145    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:36:35.001156    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:36:35.037853    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:36:35.037867    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:36:35.052611    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:36:35.052627    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:36:35.064467    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:36:35.064478    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:36:37.585648    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:42.588085    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:42.588484    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:36:42.623458    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:36:42.623580    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:36:42.642716    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:36:42.642812    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:36:42.665553    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:36:42.665628    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:36:42.677761    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:36:42.677829    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:36:42.688097    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:36:42.688165    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:36:42.699195    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:36:42.699263    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:36:42.710411    5136 logs.go:276] 0 containers: []
	W0816 10:36:42.710422    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:36:42.710478    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:36:42.730192    5136 logs.go:276] 0 containers: []
	W0816 10:36:42.730203    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:36:42.730211    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:36:42.730216    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:36:42.748082    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:36:42.748095    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:36:42.764596    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:36:42.764608    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:36:42.779298    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:36:42.779310    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:36:42.816325    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:36:42.816339    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:36:42.833818    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:36:42.833836    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:36:42.845474    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:36:42.845486    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:36:42.859582    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:36:42.859592    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:36:42.898197    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:36:42.898208    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:36:42.902268    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:36:42.902275    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:36:42.915982    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:36:42.915994    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:36:42.930782    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:36:42.930796    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:36:42.966402    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:36:42.966413    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:36:42.982906    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:36:42.982915    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:36:42.994271    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:36:42.994283    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:36:45.521376    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:50.523726    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:50.523992    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:36:50.555053    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:36:50.555182    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:36:50.574030    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:36:50.574125    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:36:50.588244    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:36:50.588320    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:36:50.600630    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:36:50.600706    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:36:50.611379    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:36:50.611444    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:36:50.622219    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:36:50.622286    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:36:50.631949    5136 logs.go:276] 0 containers: []
	W0816 10:36:50.631962    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:36:50.632019    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:36:50.642781    5136 logs.go:276] 0 containers: []
	W0816 10:36:50.642793    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:36:50.642800    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:36:50.642805    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:36:50.664324    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:36:50.664335    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:36:50.689294    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:36:50.689305    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:36:50.728072    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:36:50.728083    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:36:50.742979    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:36:50.742989    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:36:50.760194    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:36:50.760207    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:36:50.764811    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:36:50.764819    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:36:50.779171    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:36:50.779186    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:36:50.815109    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:36:50.815123    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:36:50.831052    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:36:50.831069    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:36:50.846144    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:36:50.846158    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:36:50.858136    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:36:50.858152    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:36:50.869893    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:36:50.869902    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:36:50.883722    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:36:50.883733    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:36:50.920547    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:36:50.920558    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:36:53.436352    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:36:58.437398    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:36:58.437568    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:36:58.451289    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:36:58.451378    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:36:58.462213    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:36:58.462282    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:36:58.472468    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:36:58.472542    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:36:58.483318    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:36:58.483388    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:36:58.494292    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:36:58.494356    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:36:58.504694    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:36:58.504762    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:36:58.517868    5136 logs.go:276] 0 containers: []
	W0816 10:36:58.517881    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:36:58.517943    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:36:58.528593    5136 logs.go:276] 0 containers: []
	W0816 10:36:58.528611    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:36:58.528619    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:36:58.528624    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:36:58.554667    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:36:58.554678    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:36:58.559155    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:36:58.559164    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:36:58.572908    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:36:58.572921    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:36:58.588645    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:36:58.588656    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:36:58.606584    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:36:58.606594    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:36:58.642875    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:36:58.642883    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:36:58.679524    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:36:58.679535    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:36:58.698310    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:36:58.698321    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:36:58.710755    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:36:58.710764    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:36:58.723506    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:36:58.723518    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:36:58.760988    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:36:58.761000    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:36:58.775282    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:36:58.775293    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:36:58.786811    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:36:58.786821    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:36:58.798370    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:36:58.798384    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:01.320347    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:06.322562    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:06.322939    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:06.360795    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:06.360939    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:06.381318    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:06.381408    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:06.396765    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:06.396845    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:06.409418    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:06.409489    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:06.420535    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:06.420609    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:06.431604    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:06.431676    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:06.446251    5136 logs.go:276] 0 containers: []
	W0816 10:37:06.446262    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:06.446324    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:06.465888    5136 logs.go:276] 0 containers: []
	W0816 10:37:06.465901    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:06.465908    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:06.465916    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:06.503233    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:06.503243    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:06.516047    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:06.516058    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:06.550226    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:06.550238    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:37:06.566288    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:06.566300    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:06.584505    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:06.584517    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:06.589100    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:06.589108    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:06.627677    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:06.627689    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:06.643228    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:06.643243    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:06.660526    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:06.660537    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:37:06.674400    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:06.674413    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:06.688321    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:06.688330    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:06.714919    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:06.714932    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:06.729541    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:06.729554    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:37:06.744910    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:06.744920    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:09.259710    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:14.261848    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:14.262035    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:14.280179    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:14.280274    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:14.293640    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:14.293723    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:14.305903    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:14.305967    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:14.316710    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:14.316771    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:14.327301    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:14.327369    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:14.338000    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:14.338061    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:14.348804    5136 logs.go:276] 0 containers: []
	W0816 10:37:14.348815    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:14.348867    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:14.359141    5136 logs.go:276] 0 containers: []
	W0816 10:37:14.359151    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:14.359159    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:14.359164    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:14.393610    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:14.393622    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:14.431246    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:14.431257    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:37:14.446932    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:14.446945    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:14.465726    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:14.465739    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:14.470254    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:14.470264    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:14.484874    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:14.484884    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:14.510150    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:14.510157    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:14.521524    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:14.521536    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:14.534589    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:14.534601    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:14.548889    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:14.548902    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:37:14.560512    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:14.560523    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:14.597728    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:14.597736    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:14.612086    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:14.612100    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:14.623873    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:14.623888    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
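
Each gather cycle above follows the same two-step pattern: discover the container IDs for a Kubernetes component with a docker name filter, then tail the last 400 lines of each matching container. Below is a minimal Go sketch of that discovery-then-tail loop; the helper names and the hard-coded "docker" binary are illustrative assumptions, not minikube's actual logs.go implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // One container ID per line; Fields also tolerates a trailing newline.
        return strings.Fields(string(out)), nil
    }

    // tailLogs mirrors: docker logs --tail 400 <id>
    func tailLogs(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := listContainers(component)
            if err != nil || len(ids) == 0 {
                // Matches the warning style seen in the log above.
                fmt.Printf("No container was found matching %q\n", component)
                continue
            }
            for _, id := range ids {
                logs, _ := tailLogs(id)
                fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
            }
        }
    }

Note that `docker ps -a` includes exited containers, which is why the log above finds two IDs for components such as kube-apiserver and etcd: one current container and one from an earlier restart.
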
	I0816 10:37:17.140182    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:22.142338    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
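
The paired "Checking apiserver healthz ... / stopped: ..." lines that bracket every cycle are an HTTPS probe of the apiserver's /healthz endpoint that gives up after roughly five seconds (10:37:17.140 to 10:37:22.142 in the pair above). What follows is a minimal sketch of such a probe, assuming a plain GET with a five-second client timeout and a self-signed cluster certificate; it is an illustration, not necessarily minikube's exact api_server.go code.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            // A 5s client timeout reproduces the error text in the log:
            // "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption for this sketch: the test cluster serves a
                // self-signed certificate, so skip verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
        }
    }

Every probe in this section times out the same way, which is consistent with the apiserver container being listed by `docker ps -a` but never answering on 10.0.2.15:8443.
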
	I0816 10:37:22.142678    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:22.174851    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:22.174979    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:22.193386    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:22.193481    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:22.207645    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:22.207720    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:22.219784    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:22.219858    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:22.230958    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:22.231028    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:22.241973    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:22.242045    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:22.252217    5136 logs.go:276] 0 containers: []
	W0816 10:37:22.252229    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:22.252284    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:22.262869    5136 logs.go:276] 0 containers: []
	W0816 10:37:22.262881    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:22.262889    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:22.262894    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:37:22.282011    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:22.282021    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:22.300184    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:22.300196    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:22.324312    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:22.324321    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:37:22.338817    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:22.338832    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:22.360511    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:22.360524    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:22.394264    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:22.394275    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:22.409977    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:22.409987    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:22.421120    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:22.421133    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:22.435007    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:22.435019    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:22.474624    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:22.474634    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:22.513512    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:22.513522    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:22.525277    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:22.525288    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:37:22.537795    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:22.537809    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:22.549701    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:22.549714    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:25.055797    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:30.057977    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:30.058166    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:30.077368    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:30.077461    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:30.091326    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:30.091402    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:30.103227    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:30.103301    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:30.118203    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:30.118302    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:30.128773    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:30.128844    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:30.139888    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:30.139963    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:30.150688    5136 logs.go:276] 0 containers: []
	W0816 10:37:30.150698    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:30.150761    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:30.165630    5136 logs.go:276] 0 containers: []
	W0816 10:37:30.165648    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:30.165657    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:30.165663    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:30.180015    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:30.180025    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:37:30.195856    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:30.195866    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:30.219923    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:30.219933    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:30.232016    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:30.232032    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:30.250827    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:30.250837    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:30.287889    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:30.287900    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:30.291730    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:30.291739    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:30.337713    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:30.337725    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:30.351671    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:30.351684    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:30.363325    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:30.363336    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:30.377715    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:30.377724    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:30.415690    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:30.415704    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:37:30.429562    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:30.429572    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:30.450992    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:30.451006    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
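
The "container status" step in each cycle uses a shell fallback: the backtick substitution `which crictl || echo crictl` inserts the crictl path when it exists, or the bare name otherwise; if the resulting command fails to run, the trailing `|| sudo docker ps -a` falls back to docker. The same preference order expressed as a small Go sketch, purely for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when it is on PATH, mirroring `which crictl`.
        tool := "docker"
        if _, err := exec.LookPath("crictl"); err == nil {
            tool = "crictl"
        }
        // List all containers, running or exited.
        out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(string(out))
    }
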
	I0816 10:37:32.964524    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:37.966682    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:37.966916    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:37.994590    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:37.994713    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:38.011622    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:38.011702    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:38.026818    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:38.026907    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:38.038220    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:38.038292    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:38.049175    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:38.049255    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:38.060178    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:38.060251    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:38.070248    5136 logs.go:276] 0 containers: []
	W0816 10:37:38.070259    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:38.070317    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:38.080283    5136 logs.go:276] 0 containers: []
	W0816 10:37:38.080295    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:38.080303    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:38.080338    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:38.095128    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:38.095138    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:37:38.107412    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:38.107423    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:38.126190    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:38.126200    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:38.140058    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:38.140071    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:38.163716    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:38.163724    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:38.202030    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:38.202046    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:38.220276    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:38.220285    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:37:38.234227    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:38.234242    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:38.246063    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:38.246073    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:38.286162    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:38.286182    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:38.290849    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:38.290856    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:38.302057    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:38.302072    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:38.340305    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:38.340313    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:38.351774    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:38.351789    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:37:40.867449    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:45.867795    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:45.867976    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:45.885513    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:45.885618    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:45.899092    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:45.899162    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:45.910430    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:45.910499    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:45.920999    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:45.921074    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:45.931575    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:45.931638    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:45.942195    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:45.942256    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:45.952922    5136 logs.go:276] 0 containers: []
	W0816 10:37:45.952934    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:45.952991    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:45.963472    5136 logs.go:276] 0 containers: []
	W0816 10:37:45.963484    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:45.963495    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:45.963501    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:45.984766    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:45.984777    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:46.026854    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:46.026866    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:46.039507    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:46.039519    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:37:46.057351    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:46.057360    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:46.071668    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:46.071678    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:37:46.083479    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:46.083490    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:46.107534    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:46.107543    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:46.119212    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:46.119224    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:46.123912    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:46.123919    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:46.159451    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:46.159466    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:37:46.174805    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:46.174815    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:46.188876    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:46.188887    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:46.200127    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:46.200137    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:46.237285    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:46.237293    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:48.754599    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:37:53.753204    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:37:53.753749    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:37:53.775996    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:37:53.776082    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:37:53.793563    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:37:53.793640    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:37:53.804686    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:37:53.804755    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:37:53.815464    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:37:53.815537    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:37:53.826314    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:37:53.826376    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:37:53.836826    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:37:53.836896    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:37:53.847245    5136 logs.go:276] 0 containers: []
	W0816 10:37:53.847255    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:37:53.847307    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:37:53.857659    5136 logs.go:276] 0 containers: []
	W0816 10:37:53.857671    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:37:53.857679    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:37:53.857688    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:37:53.897866    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:37:53.897878    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:37:53.911784    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:37:53.911795    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:37:53.927445    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:37:53.927455    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:37:53.939531    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:37:53.939544    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:37:53.943998    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:37:53.944006    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:37:53.981298    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:37:53.981313    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:37:53.995349    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:37:53.995362    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:37:54.007157    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:37:54.007169    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:37:54.031916    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:37:54.031930    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:37:54.071263    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:37:54.071282    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:37:54.085817    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:37:54.085828    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:37:54.118890    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:37:54.118900    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:37:54.133792    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:37:54.133803    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:37:54.146044    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:37:54.146059    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:37:56.660712    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:01.661445    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:01.661669    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:01.684541    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:01.684632    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:01.698698    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:01.698777    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:01.715199    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:01.715267    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:01.726127    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:01.726198    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:01.737330    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:01.737400    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:01.747938    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:01.748005    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:01.758052    5136 logs.go:276] 0 containers: []
	W0816 10:38:01.758065    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:01.758125    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:01.767992    5136 logs.go:276] 0 containers: []
	W0816 10:38:01.768004    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:01.768012    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:01.768017    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:01.772532    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:01.772542    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:01.809943    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:01.809955    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:01.825543    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:01.825553    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:01.843601    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:01.843611    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:01.867773    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:01.867786    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:01.906051    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:01.906059    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:01.917657    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:01.917670    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:01.932025    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:01.932035    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:01.946228    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:01.946242    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:01.957754    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:01.957768    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:01.969801    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:01.969813    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:01.981103    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:01.981114    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:02.018765    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:02.018779    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:02.033202    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:02.033213    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:04.549500    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:09.550819    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:09.551069    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:09.574995    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:09.575096    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:09.593326    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:09.593401    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:09.606507    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:09.606578    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:09.617466    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:09.617539    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:09.630756    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:09.630824    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:09.641905    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:09.641974    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:09.654258    5136 logs.go:276] 0 containers: []
	W0816 10:38:09.654271    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:09.654330    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:09.670248    5136 logs.go:276] 0 containers: []
	W0816 10:38:09.670259    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:09.670268    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:09.670274    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:09.681935    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:09.681947    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:09.719414    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:09.719425    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:09.734627    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:09.734643    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:09.748255    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:09.748265    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:09.759404    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:09.759415    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:09.780708    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:09.780722    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:09.794650    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:09.794659    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:09.818567    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:09.818575    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:09.857734    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:09.857745    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:09.894468    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:09.894479    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:09.906132    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:09.906142    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:09.921389    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:09.921401    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:09.925622    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:09.925628    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:09.943664    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:09.943675    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:12.457833    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:17.459434    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:17.459643    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:17.477855    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:17.477952    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:17.492055    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:17.492135    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:17.503669    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:17.503739    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:17.514412    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:17.514478    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:17.525092    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:17.525156    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:17.540493    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:17.540558    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:17.550623    5136 logs.go:276] 0 containers: []
	W0816 10:38:17.550638    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:17.550697    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:17.561027    5136 logs.go:276] 0 containers: []
	W0816 10:38:17.561039    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:17.561046    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:17.561052    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:17.596297    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:17.596312    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:17.610693    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:17.610704    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:17.649284    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:17.649295    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:17.653793    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:17.653802    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:17.693003    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:17.693016    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:17.704314    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:17.704327    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:17.715702    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:17.715712    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:17.727426    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:17.727442    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:17.741511    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:17.741523    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:17.761517    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:17.761530    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:17.779090    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:17.779101    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:17.793186    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:17.793195    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:17.804387    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:17.804398    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:17.818017    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:17.818027    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:20.343544    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:25.345590    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:25.345815    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:25.373847    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:25.373950    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:25.388421    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:25.388501    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:25.401036    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:25.401106    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:25.423185    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:25.423262    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:25.437516    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:25.437581    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:25.452210    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:25.452277    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:25.462772    5136 logs.go:276] 0 containers: []
	W0816 10:38:25.462784    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:25.462841    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:25.473354    5136 logs.go:276] 0 containers: []
	W0816 10:38:25.473370    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:25.473377    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:25.473382    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:25.512488    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:25.512498    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:25.526804    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:25.526817    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:25.538876    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:25.538888    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:25.551127    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:25.551138    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:25.590409    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:25.590433    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:25.595370    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:25.595383    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:25.610430    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:25.610441    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:25.621907    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:25.621920    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:25.659089    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:25.659102    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:25.672411    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:25.672423    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:25.687778    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:25.687788    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:25.705276    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:25.705287    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:25.719049    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:25.719062    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:25.742130    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:25.742139    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:28.258308    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:33.260429    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:33.260696    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:33.297526    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:33.297641    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:33.313704    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:33.313775    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:33.325561    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:33.325635    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:33.336121    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:33.336189    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:33.347021    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:33.347089    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:33.357351    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:33.357409    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:33.367405    5136 logs.go:276] 0 containers: []
	W0816 10:38:33.367418    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:33.367476    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:33.377494    5136 logs.go:276] 0 containers: []
	W0816 10:38:33.377511    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:33.377518    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:33.377527    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:33.382093    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:33.382100    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:33.423524    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:33.423534    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:33.437343    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:33.437354    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:33.454595    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:33.454608    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:33.472140    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:33.472152    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:33.494669    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:33.494677    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:33.512574    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:33.512585    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:33.550408    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:33.550419    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:33.561664    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:33.561675    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:33.573194    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:33.573205    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:33.591441    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:33.591450    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:33.630306    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:33.630317    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:33.644815    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:33.644825    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:33.659388    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:33.659399    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:36.173662    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:41.175827    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:41.176032    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:41.193879    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:41.193978    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:41.207833    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:41.207907    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:41.219066    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:41.219135    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:41.233042    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:41.233115    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:41.243601    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:41.243673    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:41.254118    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:41.254188    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:41.264601    5136 logs.go:276] 0 containers: []
	W0816 10:38:41.264616    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:41.264678    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:41.274728    5136 logs.go:276] 0 containers: []
	W0816 10:38:41.274739    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:41.274748    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:41.274753    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:41.311238    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:41.311248    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:41.360663    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:41.360685    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:41.375361    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:41.375374    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:41.389960    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:41.389976    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:41.403992    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:41.404008    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:41.418097    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:41.418111    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:41.429870    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:41.429882    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:41.445189    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:41.445203    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:41.449637    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:41.449643    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:41.483360    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:41.483375    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:41.500661    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:41.500677    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:41.523536    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:41.523545    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:41.535903    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:41.535915    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:41.555668    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:41.555684    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
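Once the IDs are known, each source is collected with a fixed 400-line budget: journalctl for kubelet and Docker/cri-docker, docker logs --tail 400 per container, kubectl describe nodes, dmesg, and a crictl-with-docker-fallback for container status. A sketch of that fan-out using the same command strings as above (hypothetical helper, run locally here rather than over SSH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one collection command through bash, as each
	// "Gathering logs for ..." step above does.
	func gather(source, cmd string) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", source, err)
			return
		}
		fmt.Printf("== %s ==\n%s", source, out)
	}

	func main() {
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}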
	I0816 10:38:44.069176    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:49.071346    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
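Each "Checking apiserver healthz ... stopped" pair above is one probe of https://10.0.2.15:8443/healthz that gives up after roughly five seconds; under the qemu2 driver's user-mode networking the guest address 10.0.2.15 is typically unreachable from the host, so every probe in this log ends with the same client timeout. A sketch of such a probe, assuming a 5s client timeout and a self-signed serving certificate (hypothetical code, not minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func probeHealthz(url string) error {
		client := &http.Client{
			// Matches the ~5s gap between "Checking" and "stopped" above.
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver serves a self-signed cert on first contact.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			// Surfaces as "Client.Timeout exceeded while awaiting headers".
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz: %s", resp.Status)
		}
		return nil
	}

	func main() {
		if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
		}
	}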
	I0816 10:38:49.071562    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:49.090704    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:49.090794    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:49.104660    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:49.104731    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:49.115935    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:49.116006    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:49.126212    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:49.126279    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:49.137074    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:49.137148    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:49.147667    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:49.147730    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:49.157431    5136 logs.go:276] 0 containers: []
	W0816 10:38:49.157442    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:49.157498    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:49.167451    5136 logs.go:276] 0 containers: []
	W0816 10:38:49.167462    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:49.167470    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:49.167476    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:49.210870    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:49.210887    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:49.225108    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:49.225119    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:49.237360    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:49.237371    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:49.249314    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:49.249329    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:49.262932    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:49.262945    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:49.296880    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:49.296896    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:49.311529    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:49.311540    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:49.334183    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:49.334190    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:49.338152    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:49.338161    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:49.354336    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:49.354347    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:49.392725    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:49.392739    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:49.407175    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:49.407191    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:49.420322    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:49.420333    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:49.437530    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:49.437540    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:51.952529    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:38:56.954768    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:38:56.955136    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:38:56.989762    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:38:56.989892    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:38:57.008668    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:38:57.008765    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:38:57.023034    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:38:57.023114    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:38:57.035602    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:38:57.035675    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:38:57.047477    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:38:57.047550    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:38:57.058403    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:38:57.058476    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:38:57.068736    5136 logs.go:276] 0 containers: []
	W0816 10:38:57.068753    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:38:57.068817    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:38:57.079277    5136 logs.go:276] 0 containers: []
	W0816 10:38:57.079291    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:38:57.079298    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:38:57.079304    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:38:57.090422    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:38:57.090436    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:38:57.102322    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:38:57.102333    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:38:57.115749    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:38:57.115759    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:38:57.130663    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:38:57.130676    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:38:57.142646    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:38:57.142660    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:38:57.154113    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:38:57.154125    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:38:57.171530    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:38:57.171541    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:38:57.206107    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:38:57.206120    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:38:57.243917    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:38:57.243929    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:38:57.258075    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:38:57.258086    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:38:57.273596    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:38:57.273606    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:38:57.292478    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:38:57.292489    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:38:57.315539    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:38:57.315554    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:38:57.353238    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:38:57.353247    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:38:59.858685    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:04.860155    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:04.860362    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:04.885582    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:39:04.885700    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:04.902073    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:39:04.902166    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:04.916246    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:39:04.916321    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:04.927142    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:39:04.927207    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:04.937856    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:39:04.937926    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:04.951971    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:39:04.952038    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:04.961857    5136 logs.go:276] 0 containers: []
	W0816 10:39:04.961869    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:04.961926    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:04.971660    5136 logs.go:276] 0 containers: []
	W0816 10:39:04.971675    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:39:04.971682    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:04.971687    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:05.008544    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:39:05.008555    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:39:05.021875    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:05.021885    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:05.026109    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:39:05.026116    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:39:05.040600    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:39:05.040610    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:39:05.055278    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:39:05.055290    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:39:05.070743    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:39:05.070754    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:39:05.088957    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:05.088968    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:05.112603    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:05.112611    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:05.146752    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:39:05.146763    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:39:05.192542    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:39:05.192556    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:39:05.207035    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:39:05.207046    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:39:05.222336    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:39:05.222347    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:39:05.234758    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:39:05.234771    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:05.246344    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:39:05.246356    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:39:07.760709    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:12.761966    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:12.762185    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:39:12.786461    5136 logs.go:276] 2 containers: [6f87224f6deb 9533d81142ad]
	I0816 10:39:12.786550    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:39:12.798023    5136 logs.go:276] 2 containers: [cc00c134823c 5db973a16a19]
	I0816 10:39:12.798096    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:39:12.808684    5136 logs.go:276] 1 containers: [0ad6370357cb]
	I0816 10:39:12.808765    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:39:12.819515    5136 logs.go:276] 2 containers: [699b224f21ac 44ae055ab8e7]
	I0816 10:39:12.819585    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:39:12.830395    5136 logs.go:276] 1 containers: [bd798273808e]
	I0816 10:39:12.830463    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:39:12.843062    5136 logs.go:276] 2 containers: [1edb09879be0 c96cfddd42cc]
	I0816 10:39:12.843134    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:39:12.853564    5136 logs.go:276] 0 containers: []
	W0816 10:39:12.853580    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:39:12.853644    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:39:12.868251    5136 logs.go:276] 0 containers: []
	W0816 10:39:12.868262    5136 logs.go:278] No container was found matching "storage-provisioner"
	I0816 10:39:12.868269    5136 logs.go:123] Gathering logs for etcd [cc00c134823c] ...
	I0816 10:39:12.868274    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc00c134823c"
	I0816 10:39:12.882098    5136 logs.go:123] Gathering logs for kube-controller-manager [1edb09879be0] ...
	I0816 10:39:12.882109    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1edb09879be0"
	I0816 10:39:12.899458    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:39:12.899468    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:39:12.923329    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:39:12.923338    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:39:12.961552    5136 logs.go:123] Gathering logs for kube-apiserver [9533d81142ad] ...
	I0816 10:39:12.961560    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9533d81142ad"
	I0816 10:39:12.998861    5136 logs.go:123] Gathering logs for coredns [0ad6370357cb] ...
	I0816 10:39:12.998871    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ad6370357cb"
	I0816 10:39:13.010973    5136 logs.go:123] Gathering logs for kube-scheduler [44ae055ab8e7] ...
	I0816 10:39:13.010984    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44ae055ab8e7"
	I0816 10:39:13.025907    5136 logs.go:123] Gathering logs for kube-proxy [bd798273808e] ...
	I0816 10:39:13.025918    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd798273808e"
	I0816 10:39:13.037920    5136 logs.go:123] Gathering logs for kube-controller-manager [c96cfddd42cc] ...
	I0816 10:39:13.037933    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c96cfddd42cc"
	I0816 10:39:13.057790    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:39:13.057805    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:39:13.061907    5136 logs.go:123] Gathering logs for kube-apiserver [6f87224f6deb] ...
	I0816 10:39:13.061913    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f87224f6deb"
	I0816 10:39:13.078883    5136 logs.go:123] Gathering logs for etcd [5db973a16a19] ...
	I0816 10:39:13.078893    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5db973a16a19"
	I0816 10:39:13.093779    5136 logs.go:123] Gathering logs for kube-scheduler [699b224f21ac] ...
	I0816 10:39:13.093793    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 699b224f21ac"
	I0816 10:39:13.108242    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:39:13.108258    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:39:13.119841    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:39:13.119853    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:39:15.656022    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:20.657610    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:20.657780    5136 kubeadm.go:597] duration metric: took 4m3.396963125s to restartPrimaryControlPlane
	W0816 10:39:20.657859    5136 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0816 10:39:20.657905    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0816 10:39:21.598842    5136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 10:39:21.603939    5136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 10:39:21.606871    5136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 10:39:21.609610    5136 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 10:39:21.609619    5136 kubeadm.go:157] found existing configuration files:
	
	I0816 10:39:21.609639    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf
	I0816 10:39:21.612053    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 10:39:21.612072    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 10:39:21.614783    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf
	I0816 10:39:21.617164    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 10:39:21.617185    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 10:39:21.620146    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf
	I0816 10:39:21.623295    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 10:39:21.623317    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 10:39:21.626556    5136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf
	I0816 10:39:21.629272    5136 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 10:39:21.629297    5136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
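The grep/rm sequence above is the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed so that kubeadm init can regenerate it (here the files simply no longer exist after the reset, so grep exits 2 and the rm is a no-op). A sketch of that check-and-remove loop (hypothetical helper; minikube issues these commands over SSH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// cleanupStaleConfig removes any kubeconfig that does not reference
	// the expected control-plane endpoint, mirroring the sequence above.
	func cleanupStaleConfig(endpoint string) {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint (or the file) is missing.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}

	func main() {
		cleanupStaleConfig("https://control-plane.minikube.internal:50498")
	}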
	I0816 10:39:21.632178    5136 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0816 10:39:21.648662    5136 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0816 10:39:21.648696    5136 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 10:39:21.696553    5136 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 10:39:21.696644    5136 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 10:39:21.696699    5136 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 10:39:21.752680    5136 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 10:39:21.756839    5136 out.go:235]   - Generating certificates and keys ...
	I0816 10:39:21.756963    5136 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 10:39:21.757015    5136 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 10:39:21.757052    5136 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0816 10:39:21.757085    5136 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0816 10:39:21.757165    5136 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0816 10:39:21.757198    5136 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0816 10:39:21.757229    5136 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0816 10:39:21.757261    5136 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0816 10:39:21.757306    5136 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0816 10:39:21.757369    5136 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0816 10:39:21.757393    5136 kubeadm.go:310] [certs] Using the existing "sa" key
	I0816 10:39:21.757421    5136 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 10:39:21.844193    5136 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 10:39:21.917649    5136 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 10:39:22.091503    5136 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 10:39:22.294140    5136 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 10:39:22.326802    5136 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 10:39:22.327303    5136 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 10:39:22.327356    5136 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 10:39:22.415399    5136 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 10:39:22.423685    5136 out.go:235]   - Booting up control plane ...
	I0816 10:39:22.423735    5136 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 10:39:22.423777    5136 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 10:39:22.423812    5136 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 10:39:22.423853    5136 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 10:39:22.423924    5136 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0816 10:39:26.929633    5136 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.505730 seconds
	I0816 10:39:26.929714    5136 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 10:39:26.934595    5136 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 10:39:27.444340    5136 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 10:39:27.444564    5136 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-403000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 10:39:27.949128    5136 kubeadm.go:310] [bootstrap-token] Using token: sa33xc.0uhd5ykuoldhwzac
	I0816 10:39:27.955477    5136 out.go:235]   - Configuring RBAC rules ...
	I0816 10:39:27.955534    5136 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 10:39:27.955590    5136 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 10:39:27.957355    5136 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 10:39:27.962114    5136 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 10:39:27.963400    5136 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 10:39:27.964291    5136 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 10:39:27.967240    5136 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 10:39:28.169769    5136 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 10:39:28.352524    5136 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 10:39:28.352900    5136 kubeadm.go:310] 
	I0816 10:39:28.352929    5136 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 10:39:28.352958    5136 kubeadm.go:310] 
	I0816 10:39:28.352999    5136 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 10:39:28.353003    5136 kubeadm.go:310] 
	I0816 10:39:28.353031    5136 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 10:39:28.353065    5136 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 10:39:28.353102    5136 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 10:39:28.353105    5136 kubeadm.go:310] 
	I0816 10:39:28.353135    5136 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 10:39:28.353138    5136 kubeadm.go:310] 
	I0816 10:39:28.353160    5136 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 10:39:28.353164    5136 kubeadm.go:310] 
	I0816 10:39:28.353189    5136 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 10:39:28.353228    5136 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 10:39:28.353274    5136 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 10:39:28.353277    5136 kubeadm.go:310] 
	I0816 10:39:28.353323    5136 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 10:39:28.353361    5136 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 10:39:28.353365    5136 kubeadm.go:310] 
	I0816 10:39:28.353413    5136 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sa33xc.0uhd5ykuoldhwzac \
	I0816 10:39:28.353470    5136 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3dbef51adc186d93171c6716e4c9d3e67358220996635d2d9ed7318abf8b1c24 \
	I0816 10:39:28.353480    5136 kubeadm.go:310] 	--control-plane 
	I0816 10:39:28.353483    5136 kubeadm.go:310] 
	I0816 10:39:28.353529    5136 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 10:39:28.353535    5136 kubeadm.go:310] 
	I0816 10:39:28.353572    5136 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sa33xc.0uhd5ykuoldhwzac \
	I0816 10:39:28.353620    5136 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3dbef51adc186d93171c6716e4c9d3e67358220996635d2d9ed7318abf8b1c24 
	I0816 10:39:28.356203    5136 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 10:39:28.356295    5136 cni.go:84] Creating CNI manager for ""
	I0816 10:39:28.356305    5136 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:39:28.360055    5136 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 10:39:28.367051    5136 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0816 10:39:28.369909    5136 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
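The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are not shown in the log. For illustration only, a bridge CNI conflist of the general shape minikube generates might look like the following (representative values, not the actual file contents):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}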
	I0816 10:39:28.374620    5136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 10:39:28.374676    5136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 10:39:28.374678    5136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-403000 minikube.k8s.io/updated_at=2024_08_16T10_39_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=stopped-upgrade-403000 minikube.k8s.io/primary=true
	I0816 10:39:28.379841    5136 ops.go:34] apiserver oom_adj: -16
	I0816 10:39:28.417340    5136 kubeadm.go:1113] duration metric: took 42.695792ms to wait for elevateKubeSystemPrivileges
	I0816 10:39:28.417357    5136 kubeadm.go:394] duration metric: took 4m11.170290708s to StartCluster
	I0816 10:39:28.417368    5136 settings.go:142] acquiring lock: {Name:mkd2048b6677d6c95a407663b8dc541f5fa54e50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:39:28.417460    5136 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:39:28.417923    5136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/kubeconfig: {Name:mk2e4f2b039616ddb85ed20d74e703a928518229 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:39:28.418151    5136 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:39:28.418159    5136 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 10:39:28.418197    5136 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-403000"
	I0816 10:39:28.418209    5136 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-403000"
	W0816 10:39:28.418212    5136 addons.go:243] addon storage-provisioner should already be in state true
	I0816 10:39:28.418224    5136 host.go:66] Checking if "stopped-upgrade-403000" exists ...
	I0816 10:39:28.418217    5136 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-403000"
	I0816 10:39:28.418239    5136 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:39:28.418297    5136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-403000"
	I0816 10:39:28.422054    5136 out.go:177] * Verifying Kubernetes components...
	I0816 10:39:28.422931    5136 kapi.go:59] client config for stopped-upgrade-403000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/stopped-upgrade-403000/client.key", CAFile:"/Users/jenkins/minikube-integration/19461-1189/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101a3d610), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 10:39:28.426304    5136 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-403000"
	W0816 10:39:28.426309    5136 addons.go:243] addon default-storageclass should already be in state true
	I0816 10:39:28.426316    5136 host.go:66] Checking if "stopped-upgrade-403000" exists ...
	I0816 10:39:28.426816    5136 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 10:39:28.426822    5136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 10:39:28.426827    5136 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/id_rsa Username:docker}
	I0816 10:39:28.430034    5136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 10:39:28.434024    5136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 10:39:28.438055    5136 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:39:28.438061    5136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 10:39:28.438068    5136 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/stopped-upgrade-403000/id_rsa Username:docker}
	I0816 10:39:28.528239    5136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 10:39:28.533536    5136 api_server.go:52] waiting for apiserver process to appear ...
	I0816 10:39:28.533576    5136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 10:39:28.537428    5136 api_server.go:72] duration metric: took 119.266959ms to wait for apiserver process to appear ...
	I0816 10:39:28.537436    5136 api_server.go:88] waiting for apiserver healthz status ...
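Readiness is gated in two phases: the pgrep above first confirms a kube-apiserver process exists (which succeeds here in about 119ms), and only then does the healthz polling below have to return 200 (which it never does in this run). A sketch of the first gate (hypothetical helper; minikube runs the same pgrep over SSH):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until a process matching the pattern
	// appears or the deadline passes.
	func waitForProcess(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("no process matching %q after %s", pattern, timeout)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
			fmt.Println(err)
		}
	}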
	I0816 10:39:28.537444    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:28.593132    5136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 10:39:28.611764    5136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 10:39:28.959480    5136 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0816 10:39:28.959496    5136 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0816 10:39:33.539458    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:33.539500    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:38.540151    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:38.540167    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:43.540463    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:43.540491    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:48.540959    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:48.540997    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:53.541676    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:53.541723    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:39:58.543099    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:39:58.543145    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0816 10:39:58.961225    5136 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0816 10:39:58.965638    5136 out.go:177] * Enabled addons: storage-provisioner
	I0816 10:39:58.973507    5136 addons.go:510] duration metric: took 30.556081958s for enable addons: enabled=[storage-provisioner]
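Note the asymmetry in the addon results: both manifests were applied from inside the VM over SSH at 10:39:28 above, but the default-storageclass callback lists StorageClasses from the host against https://10.0.2.15:8443, the same endpoint every healthz probe in this log fails to reach, so it times out after 30s while storage-provisioner is still reported as enabled.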
	I0816 10:40:03.544401    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:03.544518    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:40:08.546188    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:08.546224    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:40:13.547989    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:13.548033    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:40:18.550162    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:18.550193    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:40:23.552250    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:23.552286    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:40:28.554591    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:28.554753    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:40:28.572648    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:40:28.572718    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:40:28.591437    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:40:28.591499    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:40:28.601990    5136 logs.go:276] 2 containers: [8e1fa133e33c 2ac4e79994af]
	I0816 10:40:28.602056    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:40:28.612408    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:40:28.612467    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:40:28.623148    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:40:28.623219    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:40:28.634712    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:40:28.634775    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:40:28.644931    5136 logs.go:276] 0 containers: []
	W0816 10:40:28.644940    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:40:28.644986    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:40:28.655737    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:40:28.655749    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:40:28.655754    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:40:28.667927    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:40:28.667937    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:40:28.680107    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:40:28.680116    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:40:28.695812    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:40:28.695827    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:40:28.707197    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:40:28.707210    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:40:28.711760    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:40:28.711768    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:40:28.747092    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:40:28.747102    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:40:28.761881    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:40:28.761890    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:40:28.779439    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:40:28.779454    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:40:28.791408    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:40:28.791418    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:40:28.815968    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:40:28.815976    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:40:28.828157    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:40:28.828171    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:40:28.863100    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:40:28.863109    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:40:31.380104    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:40:36.382474    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:36.382895    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:40:36.421485    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:40:36.421618    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:40:36.442899    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:40:36.442991    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:40:36.463223    5136 logs.go:276] 2 containers: [8e1fa133e33c 2ac4e79994af]
	I0816 10:40:36.463302    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:40:36.474787    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:40:36.474854    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:40:36.485063    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:40:36.485124    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:40:36.503051    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:40:36.503118    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:40:36.513319    5136 logs.go:276] 0 containers: []
	W0816 10:40:36.513334    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:40:36.513390    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:40:36.523872    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:40:36.523893    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:40:36.523898    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:40:36.548698    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:40:36.548707    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:40:36.563031    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:40:36.563041    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:40:36.578046    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:40:36.578056    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:40:36.589645    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:40:36.589656    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:40:36.601282    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:40:36.601294    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:40:36.618617    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:40:36.618628    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:40:36.630375    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:40:36.630386    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:40:36.663768    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:40:36.663776    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:40:36.667905    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:40:36.667911    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:40:36.702157    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:40:36.702167    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:40:36.716392    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:40:36.716403    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:40:36.727573    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:40:36.727583    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:40:39.239021    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:40:44.241710    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:44.242105    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:40:44.278567    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:40:44.278686    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:40:44.297080    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:40:44.297161    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:40:44.310998    5136 logs.go:276] 2 containers: [8e1fa133e33c 2ac4e79994af]
	I0816 10:40:44.311071    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:40:44.322358    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:40:44.322420    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:40:44.333072    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:40:44.333135    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:40:44.343927    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:40:44.343987    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:40:44.358588    5136 logs.go:276] 0 containers: []
	W0816 10:40:44.358597    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:40:44.358645    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:40:44.368927    5136 logs.go:276] 1 containers: [82cad186e3d0]
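	(The block above is the discovery step: one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per control-plane component, with a W-level warning when nothing matches, as for "kindnet" here. A sketch of the same step in Go follows; it assumes the docker CLI is on PATH, and the helper name containerIDs is hypothetical rather than the logs.go:276 implementation.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers, including exited ones, whose name
    // matches the k8s_<component> prefix, mirroring the filter used above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, err)
                continue
            }
            if len(ids) == 0 {
                // cf. the warning above: No container was found matching "kindnet"
                fmt.Printf("no container matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }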
	I0816 10:40:44.368940    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:40:44.368945    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:40:44.383810    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:40:44.383822    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:40:44.398098    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:40:44.398107    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:40:44.409890    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:40:44.409901    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:40:44.427492    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:40:44.427503    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:40:44.451738    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:40:44.451747    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:40:44.485188    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:40:44.485195    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:40:44.519463    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:40:44.519479    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:40:44.534121    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:40:44.534132    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:40:44.545553    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:40:44.545562    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:40:44.556991    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:40:44.557001    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:40:44.568068    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:40:44.568078    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:40:44.580768    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:40:44.580779    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
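	(Each "Gathering logs for ..." line above pairs with one bash command capped at the last 400 lines: "docker logs --tail 400" per container, journalctl for the kubelet and Docker/cri-docker units, a filtered dmesg, and a versioned kubectl describe with an explicit kubeconfig. In the real run these execute inside the guest over SSH (ssh_runner.go:195); the local loop below is only an illustration, with the commands copied verbatim from the log.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Log sources as they appear above; the map keys are the labels
        // minikube prints, the values are the exact commands it runs.
        sources := map[string]string{
            "kubelet":        "sudo journalctl -u kubelet -n 400",
            "Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
            "kube-apiserver": "docker logs --tail 400 4baeec8326c4",
        }
        for name, cmd := range sources {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("%s: %v\n", name, err)
                continue
            }
            fmt.Printf("==> %s <==\n%s\n", name, out)
        }
    }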
	I0816 10:40:47.086917    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:40:52.089572    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:52.089779    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:40:52.115096    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:40:52.115194    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:40:52.130980    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:40:52.131062    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:40:52.144414    5136 logs.go:276] 2 containers: [8e1fa133e33c 2ac4e79994af]
	I0816 10:40:52.144482    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:40:52.163497    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:40:52.163563    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:40:52.173976    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:40:52.174046    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:40:52.185204    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:40:52.185261    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:40:52.199627    5136 logs.go:276] 0 containers: []
	W0816 10:40:52.199638    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:40:52.199693    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:40:52.210332    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:40:52.210348    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:40:52.210353    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:40:52.221979    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:40:52.221993    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:40:52.241431    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:40:52.241443    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:40:52.265193    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:40:52.265205    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:40:52.269705    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:40:52.269714    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:40:52.303957    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:40:52.303970    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:40:52.318250    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:40:52.318263    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:40:52.335604    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:40:52.335617    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:40:52.347637    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:40:52.347650    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:40:52.381004    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:40:52.381012    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:40:52.396164    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:40:52.396174    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:40:52.413761    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:40:52.413773    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:40:52.425346    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:40:52.425356    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
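	(The "container status" command that closes each cycle above degrades gracefully: `which crictl || echo crictl` substitutes crictl's path when it is installed, otherwise the bare word "crictl" fails to execute and the outer "||" falls back to "sudo docker ps -a". The same logic expressed in Go, as a sketch with a hypothetical helper name:)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl when installed, otherwise falls back to
    // the Docker CLI, mirroring the shell fallback in the command above.
    func containerStatus() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            return exec.Command("sudo", path, "ps", "-a").CombinedOutput()
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print(string(out))
    }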
	I0816 10:40:54.938540    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:40:59.941008    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:40:59.941531    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:40:59.980424    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:40:59.980546    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:41:00.001776    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:41:00.001901    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:41:00.016915    5136 logs.go:276] 2 containers: [8e1fa133e33c 2ac4e79994af]
	I0816 10:41:00.016983    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:41:00.032354    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:41:00.032418    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:41:00.042998    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:41:00.043073    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:41:00.054646    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:41:00.054710    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:41:00.069447    5136 logs.go:276] 0 containers: []
	W0816 10:41:00.069460    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:41:00.069518    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:41:00.080082    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:41:00.080097    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:41:00.080103    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:41:00.094470    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:41:00.094483    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:41:00.105912    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:41:00.105924    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:41:00.129548    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:41:00.129555    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:41:00.133452    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:41:00.133461    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:41:00.144880    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:41:00.144892    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:41:00.160817    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:41:00.160830    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:41:00.174754    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:41:00.174762    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:41:00.186898    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:41:00.186907    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:41:00.198624    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:41:00.198640    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:41:00.217316    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:41:00.217325    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:41:00.231477    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:41:00.231493    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:41:00.267198    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:41:00.267217    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:41:02.805627    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:41:07.808300    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:41:07.808735    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:41:07.848974    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:41:07.849114    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:41:07.870726    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:41:07.870831    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:41:07.888601    5136 logs.go:276] 2 containers: [8e1fa133e33c 2ac4e79994af]
	I0816 10:41:07.888676    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:41:07.900839    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:41:07.900904    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:41:07.911572    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:41:07.911641    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:41:07.922994    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:41:07.923070    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:41:07.935547    5136 logs.go:276] 0 containers: []
	W0816 10:41:07.935558    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:41:07.935614    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:41:07.947275    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:41:07.947289    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:41:07.947294    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:41:07.959089    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:41:07.959098    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:41:07.970791    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:41:07.970804    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:41:07.982336    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:41:07.982351    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:41:07.997223    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:41:07.997236    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:41:08.009586    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:41:08.009600    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:41:08.044178    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:41:08.044189    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:41:08.058345    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:41:08.058357    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:41:08.073299    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:41:08.073309    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:41:08.097243    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:41:08.097257    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:41:08.118564    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:41:08.118576    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:41:08.141652    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:41:08.141661    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:41:08.174737    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:41:08.174747    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:41:10.680766    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:41:15.683120    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:41:15.683603    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:41:15.725373    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:41:15.725494    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:41:15.747618    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:41:15.747709    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:41:15.762670    5136 logs.go:276] 2 containers: [8e1fa133e33c 2ac4e79994af]
	I0816 10:41:15.762742    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:41:15.775256    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:41:15.775325    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:41:15.786675    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:41:15.786744    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:41:15.797643    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:41:15.797710    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:41:15.811950    5136 logs.go:276] 0 containers: []
	W0816 10:41:15.811961    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:41:15.812017    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:41:15.823392    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:41:15.823413    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:41:15.823418    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:41:15.839448    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:41:15.839457    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:41:15.856442    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:41:15.856453    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:41:15.871447    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:41:15.871459    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:41:15.906883    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:41:15.906891    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:41:15.911158    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:41:15.911168    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:41:15.948923    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:41:15.948937    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:41:15.963915    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:41:15.963926    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:41:15.978129    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:41:15.978139    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:41:15.990185    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:41:15.990199    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:41:16.005929    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:41:16.005939    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:41:16.025766    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:41:16.025777    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:41:16.050681    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:41:16.050688    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:41:18.563346    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:41:23.564540    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:41:23.565086    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:41:23.605791    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:41:23.605919    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:41:23.627495    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:41:23.627572    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:41:23.643414    5136 logs.go:276] 2 containers: [8e1fa133e33c 2ac4e79994af]
	I0816 10:41:23.643481    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:41:23.655911    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:41:23.655971    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:41:23.667062    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:41:23.667131    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:41:23.677744    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:41:23.677808    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:41:23.692987    5136 logs.go:276] 0 containers: []
	W0816 10:41:23.693003    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:41:23.693056    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:41:23.704956    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:41:23.704971    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:41:23.704979    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:41:23.717537    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:41:23.717547    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:41:23.732480    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:41:23.732492    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:41:23.744704    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:41:23.744716    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:41:23.762682    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:41:23.762692    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:41:23.786894    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:41:23.786901    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:41:23.798664    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:41:23.798678    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:41:23.802726    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:41:23.802734    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:41:23.838152    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:41:23.838163    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:41:23.852504    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:41:23.852516    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:41:23.864558    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:41:23.864571    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:41:23.876866    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:41:23.876878    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:41:23.911894    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:41:23.911902    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:41:26.428454    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:41:31.431179    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:41:31.431445    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:41:31.461053    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:41:31.461177    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:41:31.479828    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:41:31.479912    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:41:31.493870    5136 logs.go:276] 2 containers: [8e1fa133e33c 2ac4e79994af]
	I0816 10:41:31.493944    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:41:31.506073    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:41:31.506143    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:41:31.517472    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:41:31.517537    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:41:31.528121    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:41:31.528189    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:41:31.538900    5136 logs.go:276] 0 containers: []
	W0816 10:41:31.538911    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:41:31.538970    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:41:31.549691    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:41:31.549708    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:41:31.549713    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:41:31.564829    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:41:31.564838    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:41:31.579109    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:41:31.579118    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:41:31.594192    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:41:31.594203    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:41:31.606572    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:41:31.606583    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:41:31.641981    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:41:31.641993    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:41:31.646375    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:41:31.646381    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:41:31.658314    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:41:31.658325    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:41:31.670707    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:41:31.670718    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:41:31.688709    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:41:31.688719    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:41:31.700584    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:41:31.700595    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:41:31.726248    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:41:31.726255    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:41:31.738884    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:41:31.738898    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:41:34.275921    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:41:39.276422    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:41:39.276644    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:41:39.303038    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:41:39.303108    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:41:39.325631    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:41:39.325688    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:41:39.343355    5136 logs.go:276] 3 containers: [44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:41:39.343417    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:41:39.358739    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:41:39.358805    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:41:39.379573    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:41:39.379621    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:41:39.391091    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:41:39.391155    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:41:39.401822    5136 logs.go:276] 0 containers: []
	W0816 10:41:39.401830    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:41:39.401882    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:41:39.419320    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:41:39.419336    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:41:39.419341    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:41:39.434679    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:41:39.434689    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:41:39.451011    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:41:39.451021    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:41:39.468185    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:41:39.468193    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:41:39.497465    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:41:39.497479    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:41:39.530933    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:41:39.530946    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:41:39.535376    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:41:39.535386    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:41:39.549948    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:41:39.549960    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:41:39.567424    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:41:39.567433    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:41:39.601727    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:41:39.601735    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:41:39.636427    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:41:39.636438    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:41:39.648369    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:41:39.648379    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:41:39.663922    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:41:39.663932    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:41:39.687517    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:41:39.687524    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:41:42.201059    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:41:47.203704    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:41:47.204101    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:41:47.249321    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:41:47.249448    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:41:47.268461    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:41:47.268538    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:41:47.283153    5136 logs.go:276] 4 containers: [ab3aedd7d461 44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:41:47.283223    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:41:47.295632    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:41:47.295700    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:41:47.306194    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:41:47.306255    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:41:47.317936    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:41:47.318017    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:41:47.328086    5136 logs.go:276] 0 containers: []
	W0816 10:41:47.328096    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:41:47.328145    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:41:47.339648    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:41:47.339667    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:41:47.339673    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:41:47.343932    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:41:47.343941    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:41:47.357651    5136 logs.go:123] Gathering logs for coredns [ab3aedd7d461] ...
	I0816 10:41:47.357662    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3aedd7d461"
	I0816 10:41:47.375653    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:41:47.375663    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:41:47.388176    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:41:47.388187    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:41:47.399726    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:41:47.399738    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:41:47.414553    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:41:47.414566    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:41:47.448792    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:41:47.448804    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:41:47.460345    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:41:47.460356    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:41:47.471750    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:41:47.471760    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:41:47.495721    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:41:47.495731    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:41:47.508287    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:41:47.508296    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:41:47.542914    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:41:47.542922    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:41:47.557124    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:41:47.557133    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:41:47.573846    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:41:47.573857    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
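	(Note that the coredns list has grown across cycles, from two IDs to three at 10:41:39 (44adbf50c2d2) and four at 10:41:47 (ab3aedd7d461). Since "docker ps -a" also lists exited containers, the growth is consistent with coredns restarting while the apiserver stays unreachable. A quick way to tell running from exited copies, a hypothetical check not taken from this log, is to print the status column alongside the ID:)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Adding {{.Status}} to the format distinguishes restarted (Exited)
        // coredns containers from the currently running one.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_coredns",
            "--format", "{{.ID}}\t{{.Status}}").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print(string(out))
    }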
	I0816 10:41:50.093405    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:41:55.096252    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:41:55.096326    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:41:55.108277    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:41:55.108323    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:41:55.119077    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:41:55.119130    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:41:55.130372    5136 logs.go:276] 4 containers: [ab3aedd7d461 44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:41:55.130429    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:41:55.143382    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:41:55.143437    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:41:55.154654    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:41:55.154703    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:41:55.165260    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:41:55.165312    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:41:55.176702    5136 logs.go:276] 0 containers: []
	W0816 10:41:55.176712    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:41:55.176787    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:41:55.188524    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:41:55.188543    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:41:55.188548    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:41:55.227758    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:41:55.227769    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:41:55.240306    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:41:55.240317    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:41:55.277385    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:41:55.277396    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:41:55.293341    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:41:55.293351    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:41:55.306424    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:41:55.306435    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:41:55.325187    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:41:55.325201    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:41:55.341895    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:41:55.341906    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:41:55.354755    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:41:55.354769    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:41:55.368157    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:41:55.368168    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:41:55.395563    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:41:55.395580    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:41:55.410245    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:41:55.410258    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:41:55.414943    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:41:55.414951    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:41:55.429533    5136 logs.go:123] Gathering logs for coredns [ab3aedd7d461] ...
	I0816 10:41:55.429544    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3aedd7d461"
	I0816 10:41:55.442411    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:41:55.442427    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:41:57.960204    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:42:02.962975    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:42:02.963369    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:42:03.004167    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:42:03.004302    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:42:03.026460    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:42:03.026568    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:42:03.041649    5136 logs.go:276] 4 containers: [ab3aedd7d461 44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:42:03.041721    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:42:03.054657    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:42:03.054727    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:42:03.065587    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:42:03.065647    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:42:03.076716    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:42:03.076779    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:42:03.087504    5136 logs.go:276] 0 containers: []
	W0816 10:42:03.087518    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:42:03.087571    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:42:03.097961    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:42:03.097980    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:42:03.097984    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:42:03.115486    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:42:03.115498    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:42:03.127737    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:42:03.127749    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:42:03.132641    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:42:03.132650    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:42:03.144983    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:42:03.144996    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:42:03.156861    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:42:03.156874    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:42:03.190853    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:42:03.190861    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:42:03.203537    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:42:03.203547    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:42:03.215842    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:42:03.215856    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:42:03.232634    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:42:03.232646    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:42:03.267260    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:42:03.267272    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:42:03.282246    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:42:03.282258    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:42:03.296130    5136 logs.go:123] Gathering logs for coredns [ab3aedd7d461] ...
	I0816 10:42:03.296142    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3aedd7d461"
	I0816 10:42:03.308099    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:42:03.308110    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:42:03.319823    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:42:03.319834    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:42:05.846901    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:42:10.849718    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:42:10.850183    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:42:10.888361    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:42:10.888479    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:42:10.916164    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:42:10.916263    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:42:10.931097    5136 logs.go:276] 4 containers: [ab3aedd7d461 44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:42:10.931177    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:42:10.943049    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:42:10.943117    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:42:10.953705    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:42:10.953774    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:42:10.964009    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:42:10.964069    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:42:10.974528    5136 logs.go:276] 0 containers: []
	W0816 10:42:10.974541    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:42:10.974596    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:42:10.984578    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:42:10.984597    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:42:10.984601    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:42:11.001975    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:42:11.001988    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:42:11.030082    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:42:11.030092    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:42:11.045855    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:42:11.045866    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:42:11.057715    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:42:11.057728    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:42:11.062309    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:42:11.062316    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:42:11.096055    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:42:11.096067    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:42:11.110724    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:42:11.110736    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:42:11.122273    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:42:11.122282    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:42:11.137342    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:42:11.137354    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:42:11.172762    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:42:11.172770    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:42:11.189999    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:42:11.190011    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:42:11.201211    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:42:11.201224    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:42:11.212595    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:42:11.212607    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:42:11.231117    5136 logs.go:123] Gathering logs for coredns [ab3aedd7d461] ...
	I0816 10:42:11.231130    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3aedd7d461"
	I0816 10:42:13.748919    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:42:18.751587    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:42:18.751820    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:42:18.771251    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:42:18.771347    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:42:18.789452    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:42:18.789525    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:42:18.801744    5136 logs.go:276] 4 containers: [ab3aedd7d461 44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:42:18.801812    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:42:18.812325    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:42:18.812389    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:42:18.822175    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:42:18.822232    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:42:18.832442    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:42:18.832513    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:42:18.842054    5136 logs.go:276] 0 containers: []
	W0816 10:42:18.842066    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:42:18.842127    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:42:18.852011    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:42:18.852029    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:42:18.852034    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:42:18.856829    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:42:18.856837    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:42:18.874296    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:42:18.874305    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:42:18.891575    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:42:18.891587    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:42:18.905769    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:42:18.905782    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:42:18.917362    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:42:18.917376    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:42:18.928690    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:42:18.928700    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:42:18.953739    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:42:18.953744    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:42:18.990435    5136 logs.go:123] Gathering logs for coredns [ab3aedd7d461] ...
	I0816 10:42:18.990444    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3aedd7d461"
	I0816 10:42:19.002014    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:42:19.002025    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:42:19.016540    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:42:19.016550    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:42:19.052192    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:42:19.052213    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:42:19.064833    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:42:19.064848    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:42:19.077679    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:42:19.077691    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:42:19.090141    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:42:19.090153    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:42:21.602918    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:42:26.605020    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:42:26.605123    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:42:26.622368    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:42:26.622445    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:42:26.633279    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:42:26.633348    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:42:26.643967    5136 logs.go:276] 4 containers: [ab3aedd7d461 44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:42:26.644031    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:42:26.654128    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:42:26.654203    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:42:26.667319    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:42:26.667377    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:42:26.679140    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:42:26.679218    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:42:26.689691    5136 logs.go:276] 0 containers: []
	W0816 10:42:26.689704    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:42:26.689756    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:42:26.703096    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:42:26.703115    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:42:26.703119    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:42:26.736159    5136 logs.go:123] Gathering logs for coredns [ab3aedd7d461] ...
	I0816 10:42:26.736166    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3aedd7d461"
	I0816 10:42:26.747669    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:42:26.747682    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:42:26.772774    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:42:26.772784    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:42:26.788478    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:42:26.788489    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:42:26.800099    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:42:26.800107    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:42:26.804615    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:42:26.804624    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:42:26.822219    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:42:26.822230    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:42:26.837563    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:42:26.837573    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:42:26.854924    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:42:26.854936    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:42:26.890037    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:42:26.890047    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:42:26.904452    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:42:26.904465    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:42:26.916152    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:42:26.916160    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:42:26.927891    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:42:26.927904    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:42:26.947444    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:42:26.947454    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:42:29.462394    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:42:34.464623    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:42:34.465055    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:42:34.501595    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:42:34.501721    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:42:34.523469    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:42:34.523575    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:42:34.539286    5136 logs.go:276] 4 containers: [ab3aedd7d461 44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:42:34.539377    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:42:34.551744    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:42:34.551815    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:42:34.562409    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:42:34.562476    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:42:34.573717    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:42:34.573783    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:42:34.584241    5136 logs.go:276] 0 containers: []
	W0816 10:42:34.584253    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:42:34.584309    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:42:34.594855    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:42:34.594870    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:42:34.594875    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:42:34.606687    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:42:34.606700    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:42:34.618632    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:42:34.618647    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:42:34.623628    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:42:34.623637    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:42:34.637818    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:42:34.637828    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:42:34.651535    5136 logs.go:123] Gathering logs for coredns [ab3aedd7d461] ...
	I0816 10:42:34.651544    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3aedd7d461"
	I0816 10:42:34.662681    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:42:34.662697    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:42:34.673809    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:42:34.673819    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:42:34.697350    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:42:34.697358    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:42:34.731237    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:42:34.731249    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:42:34.745691    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:42:34.745701    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:42:34.760913    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:42:34.760926    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:42:34.772338    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:42:34.772347    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:42:34.789705    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:42:34.789715    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:42:34.824890    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:42:34.824900    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:42:37.341065    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:42:42.343882    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:42:42.344316    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:42:42.385856    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:42:42.385970    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:42:42.408701    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:42:42.408821    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:42:42.424244    5136 logs.go:276] 4 containers: [ab3aedd7d461 44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:42:42.424326    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:42:42.436496    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:42:42.436564    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:42:42.447915    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:42:42.447981    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:42:42.458735    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:42:42.458791    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:42:42.469460    5136 logs.go:276] 0 containers: []
	W0816 10:42:42.469472    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:42:42.469521    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:42:42.480202    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:42:42.480221    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:42:42.480226    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:42:42.494486    5136 logs.go:123] Gathering logs for coredns [ab3aedd7d461] ...
	I0816 10:42:42.494495    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3aedd7d461"
	I0816 10:42:42.506042    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:42:42.506052    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:42:42.521296    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:42:42.521308    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:42:42.545217    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:42:42.545227    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:42:42.549659    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:42:42.549667    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:42:42.583318    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:42:42.583329    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:42:42.597931    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:42:42.597943    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:42:42.612865    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:42:42.612874    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:42:42.624632    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:42:42.624641    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:42:42.636396    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:42:42.636409    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:42:42.670226    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:42:42.670236    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:42:42.681547    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:42:42.681561    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:42:42.693307    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:42:42.693319    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:42:42.711205    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:42:42.711216    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:42:45.234573    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:42:50.236713    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:42:50.237098    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:42:50.276643    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:42:50.276765    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:42:50.296118    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:42:50.296197    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:42:50.313143    5136 logs.go:276] 4 containers: [ab3aedd7d461 44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:42:50.313219    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:42:50.323979    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:42:50.324047    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:42:50.334647    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:42:50.334715    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:42:50.345667    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:42:50.345737    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:42:50.356735    5136 logs.go:276] 0 containers: []
	W0816 10:42:50.356746    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:42:50.356799    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:42:50.367558    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:42:50.367575    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:42:50.367581    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:42:50.372121    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:42:50.372127    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:42:50.386289    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:42:50.386301    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:42:50.398287    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:42:50.398300    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:42:50.410211    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:42:50.410224    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:42:50.422032    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:42:50.422044    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:42:50.446858    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:42:50.446865    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:42:50.481395    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:42:50.481402    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:42:50.502040    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:42:50.502050    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:42:50.514626    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:42:50.514640    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:42:50.550169    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:42:50.550179    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:42:50.565378    5136 logs.go:123] Gathering logs for coredns [ab3aedd7d461] ...
	I0816 10:42:50.565388    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3aedd7d461"
	I0816 10:42:50.577227    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:42:50.577239    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:42:50.588734    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:42:50.588744    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:42:50.602956    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:42:50.602969    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:42:53.115428    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:42:58.117389    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:42:58.117531    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:42:58.134149    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:42:58.134224    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:42:58.148840    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:42:58.148905    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:42:58.159842    5136 logs.go:276] 4 containers: [ab3aedd7d461 44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:42:58.159905    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:42:58.170374    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:42:58.170437    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:42:58.180464    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:42:58.180533    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:42:58.190966    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:42:58.191029    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:42:58.204071    5136 logs.go:276] 0 containers: []
	W0816 10:42:58.204089    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:42:58.204143    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:42:58.214206    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:42:58.214222    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:42:58.214228    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:42:58.251821    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:42:58.251832    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:42:58.287920    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:42:58.287933    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:42:58.299694    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:42:58.299707    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:42:58.311509    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:42:58.311521    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:42:58.322908    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:42:58.322922    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:42:58.334075    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:42:58.334085    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:42:58.345994    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:42:58.346006    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:42:58.350124    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:42:58.350132    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:42:58.364350    5136 logs.go:123] Gathering logs for coredns [ab3aedd7d461] ...
	I0816 10:42:58.364364    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3aedd7d461"
	I0816 10:42:58.382970    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:42:58.382981    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:42:58.400438    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:42:58.400450    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:42:58.425790    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:42:58.425797    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:42:58.440444    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:42:58.440454    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:42:58.452272    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:42:58.452282    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:43:00.968878    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:43:05.971455    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:43:05.971857    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:43:06.025166    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:43:06.025271    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:43:06.051106    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:43:06.051185    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:43:06.062656    5136 logs.go:276] 4 containers: [ab3aedd7d461 44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:43:06.062721    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:43:06.073488    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:43:06.073547    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:43:06.083542    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:43:06.083596    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:43:06.094402    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:43:06.094461    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:43:06.105425    5136 logs.go:276] 0 containers: []
	W0816 10:43:06.105437    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:43:06.105487    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:43:06.115666    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:43:06.115682    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:43:06.115687    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:43:06.152273    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:43:06.152285    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:43:06.166091    5136 logs.go:123] Gathering logs for coredns [ab3aedd7d461] ...
	I0816 10:43:06.166104    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3aedd7d461"
	I0816 10:43:06.178844    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:43:06.178856    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:43:06.211952    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:43:06.211961    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:43:06.226047    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:43:06.226059    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:43:06.238453    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:43:06.238464    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:43:06.242626    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:43:06.242630    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:43:06.256283    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:43:06.256295    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:43:06.268605    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:43:06.268617    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:43:06.282846    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:43:06.282857    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:43:06.297544    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:43:06.297554    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:43:06.315136    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:43:06.315147    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:43:06.327008    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:43:06.327020    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:43:06.349792    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:43:06.349799    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:43:08.863684    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:43:13.865850    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:43:13.866298    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:43:13.907685    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:43:13.907801    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:43:13.929927    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:43:13.930032    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:43:13.945330    5136 logs.go:276] 4 containers: [ab3aedd7d461 44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:43:13.945410    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:43:13.958094    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:43:13.958161    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:43:13.969402    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:43:13.969467    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:43:13.979983    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:43:13.980039    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:43:13.991068    5136 logs.go:276] 0 containers: []
	W0816 10:43:13.991079    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:43:13.991128    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:43:14.001329    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:43:14.001350    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:43:14.001354    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:43:14.036619    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:43:14.036628    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:43:14.048475    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:43:14.048489    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:43:14.072331    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:43:14.072340    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:43:14.084416    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:43:14.084427    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:43:14.103391    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:43:14.103404    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:43:14.114951    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:43:14.114961    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:43:14.130965    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:43:14.130982    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:43:14.143603    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:43:14.143617    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:43:14.148005    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:43:14.148013    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:43:14.207406    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:43:14.207417    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:43:14.229602    5136 logs.go:123] Gathering logs for coredns [ab3aedd7d461] ...
	I0816 10:43:14.229614    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3aedd7d461"
	I0816 10:43:14.242675    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:43:14.242685    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:43:14.262663    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:43:14.262672    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:43:14.280746    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:43:14.280755    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:43:16.794498    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:43:21.796642    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:43:21.796686    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0816 10:43:21.808137    5136 logs.go:276] 1 containers: [4baeec8326c4]
	I0816 10:43:21.808217    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0816 10:43:21.818769    5136 logs.go:276] 1 containers: [e82b152646b2]
	I0816 10:43:21.818831    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0816 10:43:21.830043    5136 logs.go:276] 4 containers: [ab3aedd7d461 44adbf50c2d2 8e1fa133e33c 2ac4e79994af]
	I0816 10:43:21.830095    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0816 10:43:21.840747    5136 logs.go:276] 1 containers: [3a6d424d2d56]
	I0816 10:43:21.840800    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0816 10:43:21.856754    5136 logs.go:276] 1 containers: [f760a8aa610c]
	I0816 10:43:21.856810    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0816 10:43:21.868875    5136 logs.go:276] 1 containers: [b16cfba3bb85]
	I0816 10:43:21.868939    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0816 10:43:21.880398    5136 logs.go:276] 0 containers: []
	W0816 10:43:21.880405    5136 logs.go:278] No container was found matching "kindnet"
	I0816 10:43:21.880440    5136 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0816 10:43:21.892357    5136 logs.go:276] 1 containers: [82cad186e3d0]
	I0816 10:43:21.892380    5136 logs.go:123] Gathering logs for etcd [e82b152646b2] ...
	I0816 10:43:21.892386    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e82b152646b2"
	I0816 10:43:21.909875    5136 logs.go:123] Gathering logs for coredns [8e1fa133e33c] ...
	I0816 10:43:21.909884    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e1fa133e33c"
	I0816 10:43:21.921698    5136 logs.go:123] Gathering logs for coredns [44adbf50c2d2] ...
	I0816 10:43:21.921707    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44adbf50c2d2"
	I0816 10:43:21.933796    5136 logs.go:123] Gathering logs for kube-controller-manager [b16cfba3bb85] ...
	I0816 10:43:21.933817    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b16cfba3bb85"
	I0816 10:43:21.952601    5136 logs.go:123] Gathering logs for container status ...
	I0816 10:43:21.952610    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 10:43:21.964865    5136 logs.go:123] Gathering logs for coredns [2ac4e79994af] ...
	I0816 10:43:21.964876    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ac4e79994af"
	I0816 10:43:21.979434    5136 logs.go:123] Gathering logs for kube-scheduler [3a6d424d2d56] ...
	I0816 10:43:21.979444    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a6d424d2d56"
	I0816 10:43:21.994755    5136 logs.go:123] Gathering logs for kube-proxy [f760a8aa610c] ...
	I0816 10:43:21.994774    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f760a8aa610c"
	I0816 10:43:22.008828    5136 logs.go:123] Gathering logs for kubelet ...
	I0816 10:43:22.008842    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0816 10:43:22.044467    5136 logs.go:123] Gathering logs for dmesg ...
	I0816 10:43:22.044483    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 10:43:22.049684    5136 logs.go:123] Gathering logs for describe nodes ...
	I0816 10:43:22.049697    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 10:43:22.090820    5136 logs.go:123] Gathering logs for kube-apiserver [4baeec8326c4] ...
	I0816 10:43:22.090833    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4baeec8326c4"
	I0816 10:43:22.107229    5136 logs.go:123] Gathering logs for coredns [ab3aedd7d461] ...
	I0816 10:43:22.107242    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab3aedd7d461"
	I0816 10:43:22.122311    5136 logs.go:123] Gathering logs for storage-provisioner [82cad186e3d0] ...
	I0816 10:43:22.122323    5136 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82cad186e3d0"
	I0816 10:43:22.136772    5136 logs.go:123] Gathering logs for Docker ...
	I0816 10:43:22.136784    5136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0816 10:43:24.664798    5136 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0816 10:43:29.667057    5136 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 10:43:29.674501    5136 out.go:201] 
	W0816 10:43:29.679489    5136 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0816 10:43:29.679498    5136 out.go:270] * 
	W0816 10:43:29.680196    5136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:43:29.696450    5136 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-403000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.09s)
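
The stderr above shows one probe cycle repeating until the 6m0s node wait expires: minikube polls the guest apiserver's healthz endpoint, each attempt times out after roughly five seconds, and it then re-enumerates the control-plane containers and regathers their logs before probing again. A minimal sketch of that probe, runnable from any shell that can reach the guest (the address 10.0.2.15:8443 is taken from the log above; the retry count is illustrative):

    # Probe the healthz endpoint the way the log does:
    # -s silent, -k skip TLS verification, --max-time 5 mirrors the ~5s client timeout.
    for i in $(seq 1 5); do
      curl -sk --max-time 5 https://10.0.2.15:8443/healthz && break
      sleep 2
    done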

TestPause/serial/Start (10.04s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-626000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-626000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.977049875s)

-- stdout --
	* [pause-626000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-626000" primary control-plane node in "pause-626000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-626000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-626000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-626000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-626000 -n pause-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-626000 -n pause-626000: exit status 7 (65.993916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.04s)
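
Every qemu2-driver start in this run fails at the same point: the driver cannot connect to "/var/run/socket_vmnet", the VM is deleted and recreated once, and the test then exits with GUEST_PROVISION. Before rerunning, it may help to confirm that the socket exists and its daemon is alive. A hedged diagnostic sketch (the socket path comes from the error text above; the daemon and service names depend on how socket_vmnet was installed and are assumptions here):

    # Does the socket file exist, and is anything serving it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If socket_vmnet runs as a launchd service (installation-dependent):
    sudo launchctl list | grep -i vmnet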

TestNoKubernetes/serial/StartWithK8s (9.85s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-283000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-283000 --driver=qemu2 : exit status 80 (9.792881833s)

-- stdout --
	* [NoKubernetes-283000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-283000" primary control-plane node in "NoKubernetes-283000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-283000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-283000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-283000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-283000 -n NoKubernetes-283000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-283000 -n NoKubernetes-283000: exit status 7 (61.007541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-283000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.85s)

TestNoKubernetes/serial/StartWithStopK8s (5.28s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-283000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-283000 --no-kubernetes --driver=qemu2 : exit status 80 (5.239577291s)

-- stdout --
	* [NoKubernetes-283000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-283000
	* Restarting existing qemu2 VM for "NoKubernetes-283000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-283000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-283000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-283000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-283000 -n NoKubernetes-283000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-283000 -n NoKubernetes-283000: exit status 7 (42.587166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-283000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.28s)

TestNoKubernetes/serial/Start (5.28s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-283000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-283000 --no-kubernetes --driver=qemu2 : exit status 80 (5.244548291s)

-- stdout --
	* [NoKubernetes-283000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-283000
	* Restarting existing qemu2 VM for "NoKubernetes-283000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-283000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-283000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-283000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-283000 -n NoKubernetes-283000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-283000 -n NoKubernetes-283000: exit status 7 (37.881958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-283000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.28s)

TestNoKubernetes/serial/StartNoArgs (5.3s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-283000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-283000 --driver=qemu2 : exit status 80 (5.265045833s)

-- stdout --
	* [NoKubernetes-283000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-283000
	* Restarting existing qemu2 VM for "NoKubernetes-283000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-283000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-283000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-283000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-283000 -n NoKubernetes-283000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-283000 -n NoKubernetes-283000: exit status 7 (35.328041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-283000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.30s)
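
The stderr above already names a recovery path: the stale NoKubernetes-283000 profile keeps being reused ("Using the qemu2 driver based on existing profile"), and its VM can never restart while the socket refuses connections. A plausible manual cleanup before retrying the group, using the exact binary and profile name from the logs (the go test invocation is an illustrative sketch; the package path and flags may differ in this harness):

	# Drop the broken profile so the next start provisions a fresh VM
	out/minikube-darwin-arm64 delete -p NoKubernetes-283000
	# Illustrative re-run of just this serial group
	go test ./test/integration -run 'TestNoKubernetes/serial' -timeout 30m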

TestNetworkPlugins/group/auto/Start (9.97s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.967191417s)

-- stdout --
	* [auto-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-122000" primary control-plane node in "auto-122000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-122000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:41:44.794475    5433 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:41:44.794585    5433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:41:44.794588    5433 out.go:358] Setting ErrFile to fd 2...
	I0816 10:41:44.794590    5433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:41:44.794733    5433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:41:44.795844    5433 out.go:352] Setting JSON to false
	I0816 10:41:44.812290    5433 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4267,"bootTime":1723825837,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:41:44.812359    5433 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:41:44.818602    5433 out.go:177] * [auto-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:41:44.826550    5433 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:41:44.826649    5433 notify.go:220] Checking for updates...
	I0816 10:41:44.833537    5433 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:41:44.836536    5433 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:41:44.839537    5433 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:41:44.842539    5433 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:41:44.845588    5433 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:41:44.848915    5433 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:41:44.848988    5433 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:41:44.849031    5433 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:41:44.853571    5433 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:41:44.860556    5433 start.go:297] selected driver: qemu2
	I0816 10:41:44.860566    5433 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:41:44.860573    5433 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:41:44.862807    5433 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:41:44.865609    5433 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:41:44.868569    5433 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:41:44.868611    5433 cni.go:84] Creating CNI manager for ""
	I0816 10:41:44.868621    5433 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:41:44.868625    5433 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:41:44.868653    5433 start.go:340] cluster config:
	{Name:auto-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:41:44.872435    5433 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:41:44.879558    5433 out.go:177] * Starting "auto-122000" primary control-plane node in "auto-122000" cluster
	I0816 10:41:44.883503    5433 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:41:44.883519    5433 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:41:44.883531    5433 cache.go:56] Caching tarball of preloaded images
	I0816 10:41:44.883605    5433 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:41:44.883612    5433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:41:44.883669    5433 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/auto-122000/config.json ...
	I0816 10:41:44.883680    5433 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/auto-122000/config.json: {Name:mkbab9e7065f9e2fd185001e65097bddd07f2d45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:41:44.883936    5433 start.go:360] acquireMachinesLock for auto-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:41:44.883970    5433 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "auto-122000"
	I0816 10:41:44.883981    5433 start.go:93] Provisioning new machine with config: &{Name:auto-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:41:44.884012    5433 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:41:44.892525    5433 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:41:44.909118    5433 start.go:159] libmachine.API.Create for "auto-122000" (driver="qemu2")
	I0816 10:41:44.909163    5433 client.go:168] LocalClient.Create starting
	I0816 10:41:44.909229    5433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:41:44.909259    5433 main.go:141] libmachine: Decoding PEM data...
	I0816 10:41:44.909267    5433 main.go:141] libmachine: Parsing certificate...
	I0816 10:41:44.909305    5433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:41:44.909328    5433 main.go:141] libmachine: Decoding PEM data...
	I0816 10:41:44.909336    5433 main.go:141] libmachine: Parsing certificate...
	I0816 10:41:44.909793    5433 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:41:45.064352    5433 main.go:141] libmachine: Creating SSH key...
	I0816 10:41:45.199084    5433 main.go:141] libmachine: Creating Disk image...
	I0816 10:41:45.199097    5433 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:41:45.199311    5433 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/disk.qcow2
	I0816 10:41:45.209315    5433 main.go:141] libmachine: STDOUT: 
	I0816 10:41:45.209338    5433 main.go:141] libmachine: STDERR: 
	I0816 10:41:45.209387    5433 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/disk.qcow2 +20000M
	I0816 10:41:45.217416    5433 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:41:45.217432    5433 main.go:141] libmachine: STDERR: 
	I0816 10:41:45.217463    5433 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/disk.qcow2
	I0816 10:41:45.217470    5433 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:41:45.217484    5433 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:41:45.217506    5433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b2:a1:f1:f1:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/disk.qcow2
	I0816 10:41:45.219161    5433 main.go:141] libmachine: STDOUT: 
	I0816 10:41:45.219176    5433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:41:45.219196    5433 client.go:171] duration metric: took 310.035166ms to LocalClient.Create
	I0816 10:41:47.221317    5433 start.go:128] duration metric: took 2.337332833s to createHost
	I0816 10:41:47.221391    5433 start.go:83] releasing machines lock for "auto-122000", held for 2.33746975s
	W0816 10:41:47.221432    5433 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:41:47.234960    5433 out.go:177] * Deleting "auto-122000" in qemu2 ...
	W0816 10:41:47.254152    5433 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:41:47.254165    5433 start.go:729] Will try again in 5 seconds ...
	I0816 10:41:52.256413    5433 start.go:360] acquireMachinesLock for auto-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:41:52.257033    5433 start.go:364] duration metric: took 468.125µs to acquireMachinesLock for "auto-122000"
	I0816 10:41:52.257171    5433 start.go:93] Provisioning new machine with config: &{Name:auto-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:41:52.257457    5433 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:41:52.267994    5433 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:41:52.309166    5433 start.go:159] libmachine.API.Create for "auto-122000" (driver="qemu2")
	I0816 10:41:52.309211    5433 client.go:168] LocalClient.Create starting
	I0816 10:41:52.309325    5433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:41:52.309378    5433 main.go:141] libmachine: Decoding PEM data...
	I0816 10:41:52.309408    5433 main.go:141] libmachine: Parsing certificate...
	I0816 10:41:52.309470    5433 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:41:52.309519    5433 main.go:141] libmachine: Decoding PEM data...
	I0816 10:41:52.309529    5433 main.go:141] libmachine: Parsing certificate...
	I0816 10:41:52.310254    5433 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:41:52.471619    5433 main.go:141] libmachine: Creating SSH key...
	I0816 10:41:52.670149    5433 main.go:141] libmachine: Creating Disk image...
	I0816 10:41:52.670159    5433 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:41:52.670393    5433 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/disk.qcow2
	I0816 10:41:52.680236    5433 main.go:141] libmachine: STDOUT: 
	I0816 10:41:52.680256    5433 main.go:141] libmachine: STDERR: 
	I0816 10:41:52.680298    5433 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/disk.qcow2 +20000M
	I0816 10:41:52.688677    5433 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:41:52.688692    5433 main.go:141] libmachine: STDERR: 
	I0816 10:41:52.688700    5433 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/disk.qcow2
	I0816 10:41:52.688703    5433 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:41:52.688715    5433 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:41:52.688743    5433 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:f0:cd:6b:12:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/auto-122000/disk.qcow2
	I0816 10:41:52.690493    5433 main.go:141] libmachine: STDOUT: 
	I0816 10:41:52.690510    5433 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:41:52.690523    5433 client.go:171] duration metric: took 381.317ms to LocalClient.Create
	I0816 10:41:54.692576    5433 start.go:128] duration metric: took 2.435154458s to createHost
	I0816 10:41:54.692628    5433 start.go:83] releasing machines lock for "auto-122000", held for 2.435630125s
	W0816 10:41:54.692755    5433 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:41:54.706076    5433 out.go:201] 
	W0816 10:41:54.711124    5433 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:41:54.711137    5433 out.go:270] * 
	* 
	W0816 10:41:54.712358    5433 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:41:54.723094    5433 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.97s)
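
The -alsologtostderr trace above records the full launch command: QEMU is started through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected socket to QEMU as fd 3 (-netdev socket,id=net0,fd=3). That makes the client itself a convenient probe for the failing connection, independent of minikube. A minimal sketch, assuming socket_vmnet_client keeps its usual "socket_vmnet_client SOCKET COMMAND [ARGS...]" calling convention:

	# Should fail with the same "Connection refused" while the daemon is down;
	# /usr/bin/true is just a no-op payload for the probe.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true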

TestNetworkPlugins/group/flannel/Start (9.9s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.900447875s)

-- stdout --
	* [flannel-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-122000" primary control-plane node in "flannel-122000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-122000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:41:56.894743    5549 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:41:56.894877    5549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:41:56.894880    5549 out.go:358] Setting ErrFile to fd 2...
	I0816 10:41:56.894883    5549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:41:56.895022    5549 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:41:56.896054    5549 out.go:352] Setting JSON to false
	I0816 10:41:56.912199    5549 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4279,"bootTime":1723825837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:41:56.912271    5549 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:41:56.919878    5549 out.go:177] * [flannel-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:41:56.929816    5549 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:41:56.929851    5549 notify.go:220] Checking for updates...
	I0816 10:41:56.936735    5549 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:41:56.939783    5549 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:41:56.942792    5549 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:41:56.945727    5549 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:41:56.948752    5549 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:41:56.952140    5549 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:41:56.952210    5549 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:41:56.952259    5549 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:41:56.956742    5549 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:41:56.963795    5549 start.go:297] selected driver: qemu2
	I0816 10:41:56.963802    5549 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:41:56.963808    5549 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:41:56.966143    5549 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:41:56.968767    5549 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:41:56.971844    5549 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:41:56.971883    5549 cni.go:84] Creating CNI manager for "flannel"
	I0816 10:41:56.971888    5549 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0816 10:41:56.971917    5549 start.go:340] cluster config:
	{Name:flannel-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:41:56.975608    5549 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:41:56.982743    5549 out.go:177] * Starting "flannel-122000" primary control-plane node in "flannel-122000" cluster
	I0816 10:41:56.986733    5549 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:41:56.986747    5549 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:41:56.986756    5549 cache.go:56] Caching tarball of preloaded images
	I0816 10:41:56.986820    5549 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:41:56.986826    5549 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:41:56.986881    5549 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/flannel-122000/config.json ...
	I0816 10:41:56.986896    5549 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/flannel-122000/config.json: {Name:mk6629bbbc9b34020fcf144950a6792b46d20cf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:41:56.987318    5549 start.go:360] acquireMachinesLock for flannel-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:41:56.987359    5549 start.go:364] duration metric: took 34.333µs to acquireMachinesLock for "flannel-122000"
	I0816 10:41:56.987373    5549 start.go:93] Provisioning new machine with config: &{Name:flannel-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:41:56.987403    5549 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:41:56.994756    5549 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:41:57.010137    5549 start.go:159] libmachine.API.Create for "flannel-122000" (driver="qemu2")
	I0816 10:41:57.010172    5549 client.go:168] LocalClient.Create starting
	I0816 10:41:57.010237    5549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:41:57.010273    5549 main.go:141] libmachine: Decoding PEM data...
	I0816 10:41:57.010290    5549 main.go:141] libmachine: Parsing certificate...
	I0816 10:41:57.010330    5549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:41:57.010353    5549 main.go:141] libmachine: Decoding PEM data...
	I0816 10:41:57.010366    5549 main.go:141] libmachine: Parsing certificate...
	I0816 10:41:57.010796    5549 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:41:57.166905    5549 main.go:141] libmachine: Creating SSH key...
	I0816 10:41:57.277349    5549 main.go:141] libmachine: Creating Disk image...
	I0816 10:41:57.277357    5549 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:41:57.277576    5549 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/disk.qcow2
	I0816 10:41:57.287023    5549 main.go:141] libmachine: STDOUT: 
	I0816 10:41:57.287050    5549 main.go:141] libmachine: STDERR: 
	I0816 10:41:57.287095    5549 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/disk.qcow2 +20000M
	I0816 10:41:57.295137    5549 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:41:57.295152    5549 main.go:141] libmachine: STDERR: 
	I0816 10:41:57.295175    5549 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/disk.qcow2
	I0816 10:41:57.295179    5549 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:41:57.295193    5549 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:41:57.295219    5549 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f9:39:a4:df:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/disk.qcow2
	I0816 10:41:57.296870    5549 main.go:141] libmachine: STDOUT: 
	I0816 10:41:57.296886    5549 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:41:57.296910    5549 client.go:171] duration metric: took 286.741458ms to LocalClient.Create
	I0816 10:41:59.299115    5549 start.go:128] duration metric: took 2.311736917s to createHost
	I0816 10:41:59.299181    5549 start.go:83] releasing machines lock for "flannel-122000", held for 2.311865666s
	W0816 10:41:59.299264    5549 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:41:59.306675    5549 out.go:177] * Deleting "flannel-122000" in qemu2 ...
	W0816 10:41:59.340026    5549 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:41:59.340057    5549 start.go:729] Will try again in 5 seconds ...
	I0816 10:42:04.342216    5549 start.go:360] acquireMachinesLock for flannel-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:42:04.342781    5549 start.go:364] duration metric: took 452.833µs to acquireMachinesLock for "flannel-122000"
	I0816 10:42:04.342842    5549 start.go:93] Provisioning new machine with config: &{Name:flannel-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:42:04.343053    5549 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:42:04.351754    5549 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:42:04.403083    5549 start.go:159] libmachine.API.Create for "flannel-122000" (driver="qemu2")
	I0816 10:42:04.403141    5549 client.go:168] LocalClient.Create starting
	I0816 10:42:04.403260    5549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:42:04.403337    5549 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:04.403354    5549 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:04.403410    5549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:42:04.403455    5549 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:04.403475    5549 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:04.404023    5549 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:42:04.568408    5549 main.go:141] libmachine: Creating SSH key...
	I0816 10:42:04.698792    5549 main.go:141] libmachine: Creating Disk image...
	I0816 10:42:04.698801    5549 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:42:04.698993    5549 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/disk.qcow2
	I0816 10:42:04.708531    5549 main.go:141] libmachine: STDOUT: 
	I0816 10:42:04.708564    5549 main.go:141] libmachine: STDERR: 
	I0816 10:42:04.708618    5549 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/disk.qcow2 +20000M
	I0816 10:42:04.716499    5549 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:42:04.716517    5549 main.go:141] libmachine: STDERR: 
	I0816 10:42:04.716529    5549 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/disk.qcow2
	I0816 10:42:04.716540    5549 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:42:04.716552    5549 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:42:04.716598    5549 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:35:1e:b3:84:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/flannel-122000/disk.qcow2
	I0816 10:42:04.718240    5549 main.go:141] libmachine: STDOUT: 
	I0816 10:42:04.718256    5549 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:42:04.718269    5549 client.go:171] duration metric: took 315.128792ms to LocalClient.Create
	I0816 10:42:06.720307    5549 start.go:128] duration metric: took 2.377288167s to createHost
	I0816 10:42:06.720331    5549 start.go:83] releasing machines lock for "flannel-122000", held for 2.377586833s
	W0816 10:42:06.720486    5549 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:06.737787    5549 out.go:201] 
	W0816 10:42:06.742817    5549 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:42:06.742826    5549 out.go:270] * 
	* 
	W0816 10:42:06.743627    5549 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:42:06.757613    5549 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.90s)
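Every failure in this group reduces to the same root cause: QEMU is launched through socket_vmnet_client, which cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet. A minimal health check on the CI host could look like the sketch below; the daemon binary path merely mirrors the client path logged above, and the --vmnet-gateway value is an assumed example, neither is confirmed by this report.

	# Is the daemon running, and is its socket present?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, start it as root (binary path and gateway are assumptions; adjust to the install):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet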

TestNetworkPlugins/group/kindnet/Start (10.01s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.007304417s)

-- stdout --
	* [kindnet-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-122000" primary control-plane node in "kindnet-122000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-122000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:42:09.063654    5670 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:42:09.063789    5670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:42:09.063795    5670 out.go:358] Setting ErrFile to fd 2...
	I0816 10:42:09.063798    5670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:42:09.063928    5670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:42:09.065089    5670 out.go:352] Setting JSON to false
	I0816 10:42:09.081620    5670 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4292,"bootTime":1723825837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:42:09.081694    5670 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:42:09.088215    5670 out.go:177] * [kindnet-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:42:09.096115    5670 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:42:09.096196    5670 notify.go:220] Checking for updates...
	I0816 10:42:09.100636    5670 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:42:09.104076    5670 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:42:09.107097    5670 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:42:09.110127    5670 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:42:09.113119    5670 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:42:09.116413    5670 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:42:09.116484    5670 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:42:09.116536    5670 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:42:09.121150    5670 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:42:09.128135    5670 start.go:297] selected driver: qemu2
	I0816 10:42:09.128143    5670 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:42:09.128150    5670 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:42:09.130381    5670 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:42:09.134152    5670 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:42:09.137184    5670 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:42:09.137201    5670 cni.go:84] Creating CNI manager for "kindnet"
	I0816 10:42:09.137204    5670 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 10:42:09.137237    5670 start.go:340] cluster config:
	{Name:kindnet-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:42:09.140630    5670 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:42:09.147954    5670 out.go:177] * Starting "kindnet-122000" primary control-plane node in "kindnet-122000" cluster
	I0816 10:42:09.152127    5670 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:42:09.152141    5670 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:42:09.152150    5670 cache.go:56] Caching tarball of preloaded images
	I0816 10:42:09.152208    5670 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:42:09.152213    5670 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:42:09.152267    5670 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/kindnet-122000/config.json ...
	I0816 10:42:09.152276    5670 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/kindnet-122000/config.json: {Name:mk4f0fb0af8cc3360c87ec2081dd48b3431e037d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:42:09.152655    5670 start.go:360] acquireMachinesLock for kindnet-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:42:09.152685    5670 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "kindnet-122000"
	I0816 10:42:09.152696    5670 start.go:93] Provisioning new machine with config: &{Name:kindnet-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:42:09.152723    5670 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:42:09.156068    5670 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:42:09.171157    5670 start.go:159] libmachine.API.Create for "kindnet-122000" (driver="qemu2")
	I0816 10:42:09.171178    5670 client.go:168] LocalClient.Create starting
	I0816 10:42:09.171241    5670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:42:09.171275    5670 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:09.171288    5670 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:09.171322    5670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:42:09.171345    5670 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:09.171358    5670 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:09.171842    5670 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:42:09.328864    5670 main.go:141] libmachine: Creating SSH key...
	I0816 10:42:09.495209    5670 main.go:141] libmachine: Creating Disk image...
	I0816 10:42:09.495220    5670 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:42:09.495409    5670 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/disk.qcow2
	I0816 10:42:09.504698    5670 main.go:141] libmachine: STDOUT: 
	I0816 10:42:09.504723    5670 main.go:141] libmachine: STDERR: 
	I0816 10:42:09.504766    5670 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/disk.qcow2 +20000M
	I0816 10:42:09.513390    5670 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:42:09.513411    5670 main.go:141] libmachine: STDERR: 
	I0816 10:42:09.513443    5670 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/disk.qcow2
	I0816 10:42:09.513448    5670 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:42:09.513463    5670 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:42:09.513491    5670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:9c:e2:d5:7f:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/disk.qcow2
	I0816 10:42:09.515386    5670 main.go:141] libmachine: STDOUT: 
	I0816 10:42:09.515410    5670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:42:09.515433    5670 client.go:171] duration metric: took 344.259041ms to LocalClient.Create
	I0816 10:42:11.515769    5670 start.go:128] duration metric: took 2.363066167s to createHost
	I0816 10:42:11.515815    5670 start.go:83] releasing machines lock for "kindnet-122000", held for 2.36318s
	W0816 10:42:11.515866    5670 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:11.526608    5670 out.go:177] * Deleting "kindnet-122000" in qemu2 ...
	W0816 10:42:11.548574    5670 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:11.548589    5670 start.go:729] Will try again in 5 seconds ...
	I0816 10:42:16.550569    5670 start.go:360] acquireMachinesLock for kindnet-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:42:16.550767    5670 start.go:364] duration metric: took 154.25µs to acquireMachinesLock for "kindnet-122000"
	I0816 10:42:16.550815    5670 start.go:93] Provisioning new machine with config: &{Name:kindnet-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kindnet-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:42:16.550922    5670 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:42:16.559572    5670 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:42:16.581243    5670 start.go:159] libmachine.API.Create for "kindnet-122000" (driver="qemu2")
	I0816 10:42:16.581283    5670 client.go:168] LocalClient.Create starting
	I0816 10:42:16.581359    5670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:42:16.581414    5670 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:16.581427    5670 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:16.581463    5670 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:42:16.581489    5670 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:16.581497    5670 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:16.581892    5670 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:42:16.736185    5670 main.go:141] libmachine: Creating SSH key...
	I0816 10:42:16.986893    5670 main.go:141] libmachine: Creating Disk image...
	I0816 10:42:16.986905    5670 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:42:16.987124    5670 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/disk.qcow2
	I0816 10:42:16.996820    5670 main.go:141] libmachine: STDOUT: 
	I0816 10:42:16.996844    5670 main.go:141] libmachine: STDERR: 
	I0816 10:42:16.996894    5670 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/disk.qcow2 +20000M
	I0816 10:42:17.004934    5670 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:42:17.004949    5670 main.go:141] libmachine: STDERR: 
	I0816 10:42:17.004963    5670 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/disk.qcow2
	I0816 10:42:17.004968    5670 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:42:17.004978    5670 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:42:17.005020    5670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:c1:fe:ba:6c:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kindnet-122000/disk.qcow2
	I0816 10:42:17.006727    5670 main.go:141] libmachine: STDOUT: 
	I0816 10:42:17.006742    5670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:42:17.006754    5670 client.go:171] duration metric: took 425.473833ms to LocalClient.Create
	I0816 10:42:19.008789    5670 start.go:128] duration metric: took 2.457919291s to createHost
	I0816 10:42:19.008813    5670 start.go:83] releasing machines lock for "kindnet-122000", held for 2.458096708s
	W0816 10:42:19.008892    5670 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:19.018128    5670 out.go:201] 
	W0816 10:42:19.022067    5670 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:42:19.022087    5670 out.go:270] * 
	* 
	W0816 10:42:19.022570    5670 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:42:19.034094    5670 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.01s)
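The failing step can be probed without booting a VM. As the qemu-system-aarch64 command lines above show (-netdev socket,id=net0,fd=3), socket_vmnet_client connects to the socket and hands the connection to the wrapped command on file descriptor 3, so wrapping a no-op command is enough to test connectivity; using /usr/bin/true here is an arbitrary illustrative choice, not something this report runs.

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# With the daemon down, this prints the same error seen throughout this group,
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused', and exits 1.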

TestNetworkPlugins/group/enable-default-cni/Start (10.08s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.083138334s)

-- stdout --
	* [enable-default-cni-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-122000" primary control-plane node in "enable-default-cni-122000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-122000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:42:21.330346    5786 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:42:21.330473    5786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:42:21.330475    5786 out.go:358] Setting ErrFile to fd 2...
	I0816 10:42:21.330478    5786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:42:21.330603    5786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:42:21.331767    5786 out.go:352] Setting JSON to false
	I0816 10:42:21.348049    5786 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4304,"bootTime":1723825837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:42:21.348122    5786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:42:21.353481    5786 out.go:177] * [enable-default-cni-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:42:21.362366    5786 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:42:21.362418    5786 notify.go:220] Checking for updates...
	I0816 10:42:21.369277    5786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:42:21.372281    5786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:42:21.375338    5786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:42:21.376713    5786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:42:21.380321    5786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:42:21.383699    5786 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:42:21.383769    5786 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:42:21.383832    5786 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:42:21.388209    5786 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:42:21.403398    5786 start.go:297] selected driver: qemu2
	I0816 10:42:21.403406    5786 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:42:21.403412    5786 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:42:21.405633    5786 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:42:21.408372    5786 out.go:177] * Automatically selected the socket_vmnet network
	E0816 10:42:21.411435    5786 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0816 10:42:21.411481    5786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:42:21.411510    5786 cni.go:84] Creating CNI manager for "bridge"
	I0816 10:42:21.411515    5786 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:42:21.411545    5786 start.go:340] cluster config:
	{Name:enable-default-cni-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:42:21.415354    5786 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:42:21.422321    5786 out.go:177] * Starting "enable-default-cni-122000" primary control-plane node in "enable-default-cni-122000" cluster
	I0816 10:42:21.426335    5786 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:42:21.426353    5786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:42:21.426366    5786 cache.go:56] Caching tarball of preloaded images
	I0816 10:42:21.426440    5786 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:42:21.426454    5786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:42:21.426527    5786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/enable-default-cni-122000/config.json ...
	I0816 10:42:21.426544    5786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/enable-default-cni-122000/config.json: {Name:mk3d761cedf9e70d7f9db3d8a1f77d929e09b236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:42:21.426857    5786 start.go:360] acquireMachinesLock for enable-default-cni-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:42:21.426900    5786 start.go:364] duration metric: took 27.916µs to acquireMachinesLock for "enable-default-cni-122000"
	I0816 10:42:21.426913    5786 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:42:21.426938    5786 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:42:21.434356    5786 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:42:21.452240    5786 start.go:159] libmachine.API.Create for "enable-default-cni-122000" (driver="qemu2")
	I0816 10:42:21.452267    5786 client.go:168] LocalClient.Create starting
	I0816 10:42:21.452330    5786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:42:21.452372    5786 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:21.452382    5786 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:21.452419    5786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:42:21.452442    5786 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:21.452450    5786 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:21.452917    5786 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:42:21.609839    5786 main.go:141] libmachine: Creating SSH key...
	I0816 10:42:21.778539    5786 main.go:141] libmachine: Creating Disk image...
	I0816 10:42:21.778548    5786 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:42:21.778754    5786 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/disk.qcow2
	I0816 10:42:21.788106    5786 main.go:141] libmachine: STDOUT: 
	I0816 10:42:21.788125    5786 main.go:141] libmachine: STDERR: 
	I0816 10:42:21.788173    5786 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/disk.qcow2 +20000M
	I0816 10:42:21.796041    5786 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:42:21.796057    5786 main.go:141] libmachine: STDERR: 
	I0816 10:42:21.796079    5786 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/disk.qcow2
	I0816 10:42:21.796085    5786 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:42:21.796097    5786 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:42:21.796122    5786 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:2c:45:08:dc:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/disk.qcow2
	I0816 10:42:21.797669    5786 main.go:141] libmachine: STDOUT: 
	I0816 10:42:21.797683    5786 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:42:21.797704    5786 client.go:171] duration metric: took 345.438167ms to LocalClient.Create
	I0816 10:42:23.799878    5786 start.go:128] duration metric: took 2.372964916s to createHost
	I0816 10:42:23.799972    5786 start.go:83] releasing machines lock for "enable-default-cni-122000", held for 2.373117417s
	W0816 10:42:23.800068    5786 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:23.807362    5786 out.go:177] * Deleting "enable-default-cni-122000" in qemu2 ...
	W0816 10:42:23.839575    5786 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:23.839607    5786 start.go:729] Will try again in 5 seconds ...
	I0816 10:42:28.841805    5786 start.go:360] acquireMachinesLock for enable-default-cni-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:42:28.842515    5786 start.go:364] duration metric: took 524µs to acquireMachinesLock for "enable-default-cni-122000"
	I0816 10:42:28.842688    5786 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:42:28.842993    5786 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:42:28.852509    5786 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:42:28.905223    5786 start.go:159] libmachine.API.Create for "enable-default-cni-122000" (driver="qemu2")
	I0816 10:42:28.905271    5786 client.go:168] LocalClient.Create starting
	I0816 10:42:28.905385    5786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:42:28.905461    5786 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:28.905479    5786 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:28.905541    5786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:42:28.905586    5786 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:28.905595    5786 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:28.906174    5786 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:42:29.074925    5786 main.go:141] libmachine: Creating SSH key...
	I0816 10:42:29.311612    5786 main.go:141] libmachine: Creating Disk image...
	I0816 10:42:29.311623    5786 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:42:29.311861    5786 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/disk.qcow2
	I0816 10:42:29.321754    5786 main.go:141] libmachine: STDOUT: 
	I0816 10:42:29.321776    5786 main.go:141] libmachine: STDERR: 
	I0816 10:42:29.322427    5786 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/disk.qcow2 +20000M
	I0816 10:42:29.331542    5786 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:42:29.331560    5786 main.go:141] libmachine: STDERR: 
	I0816 10:42:29.331592    5786 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/disk.qcow2
	I0816 10:42:29.331596    5786 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:42:29.331602    5786 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:42:29.331632    5786 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:f2:af:f7:8d:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/enable-default-cni-122000/disk.qcow2
	I0816 10:42:29.333385    5786 main.go:141] libmachine: STDOUT: 
	I0816 10:42:29.333399    5786 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:42:29.333410    5786 client.go:171] duration metric: took 428.141667ms to LocalClient.Create
	I0816 10:42:31.333902    5786 start.go:128] duration metric: took 2.490895875s to createHost
	I0816 10:42:31.333983    5786 start.go:83] releasing machines lock for "enable-default-cni-122000", held for 2.491495583s
	W0816 10:42:31.334281    5786 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:31.350600    5786 out.go:201] 
	W0816 10:42:31.358759    5786 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:42:31.358805    5786 out.go:270] * 
	* 
	W0816 10:42:31.361404    5786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:42:31.376588    5786 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.08s)
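Note that this profile fails for the same socket_vmnet reason, not because of its flag: the E-line above ("Found deprecated --enable-default-cni flag, setting --cni=bridge") shows minikube already rewrites the option, so the non-deprecated equivalent of the command under test would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2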

TestNetworkPlugins/group/bridge/Start (9.88s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.882890333s)

                                                
                                                
-- stdout --
	* [bridge-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-122000" primary control-plane node in "bridge-122000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-122000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:42:33.612147    5899 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:42:33.612275    5899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:42:33.612278    5899 out.go:358] Setting ErrFile to fd 2...
	I0816 10:42:33.612281    5899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:42:33.612416    5899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:42:33.613521    5899 out.go:352] Setting JSON to false
	I0816 10:42:33.630482    5899 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4316,"bootTime":1723825837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:42:33.630560    5899 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:42:33.638390    5899 out.go:177] * [bridge-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:42:33.646365    5899 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:42:33.646386    5899 notify.go:220] Checking for updates...
	I0816 10:42:33.652357    5899 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:42:33.655371    5899 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:42:33.658320    5899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:42:33.661373    5899 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:42:33.664305    5899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:42:33.667626    5899 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:42:33.667695    5899 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:42:33.667743    5899 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:42:33.672414    5899 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:42:33.679363    5899 start.go:297] selected driver: qemu2
	I0816 10:42:33.679372    5899 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:42:33.679379    5899 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:42:33.681728    5899 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:42:33.684343    5899 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:42:33.685897    5899 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:42:33.685921    5899 cni.go:84] Creating CNI manager for "bridge"
	I0816 10:42:33.685926    5899 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:42:33.685973    5899 start.go:340] cluster config:
	{Name:bridge-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:42:33.689447    5899 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:42:33.697394    5899 out.go:177] * Starting "bridge-122000" primary control-plane node in "bridge-122000" cluster
	I0816 10:42:33.701339    5899 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:42:33.701355    5899 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:42:33.701369    5899 cache.go:56] Caching tarball of preloaded images
	I0816 10:42:33.701424    5899 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:42:33.701430    5899 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:42:33.701505    5899 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/bridge-122000/config.json ...
	I0816 10:42:33.701515    5899 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/bridge-122000/config.json: {Name:mk7c4e12c9e0813de31fed2d82910f9c70ee0477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:42:33.701897    5899 start.go:360] acquireMachinesLock for bridge-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:42:33.701927    5899 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "bridge-122000"
	I0816 10:42:33.701938    5899 start.go:93] Provisioning new machine with config: &{Name:bridge-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:42:33.701967    5899 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:42:33.710351    5899 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:42:33.725661    5899 start.go:159] libmachine.API.Create for "bridge-122000" (driver="qemu2")
	I0816 10:42:33.725694    5899 client.go:168] LocalClient.Create starting
	I0816 10:42:33.725763    5899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:42:33.725792    5899 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:33.725801    5899 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:33.725848    5899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:42:33.725871    5899 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:33.725880    5899 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:33.726285    5899 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:42:33.882664    5899 main.go:141] libmachine: Creating SSH key...
	I0816 10:42:33.942826    5899 main.go:141] libmachine: Creating Disk image...
	I0816 10:42:33.942831    5899 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:42:33.943018    5899 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/disk.qcow2
	I0816 10:42:33.952253    5899 main.go:141] libmachine: STDOUT: 
	I0816 10:42:33.952269    5899 main.go:141] libmachine: STDERR: 
	I0816 10:42:33.952308    5899 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/disk.qcow2 +20000M
	I0816 10:42:33.960394    5899 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:42:33.960410    5899 main.go:141] libmachine: STDERR: 
	I0816 10:42:33.960428    5899 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/disk.qcow2
	I0816 10:42:33.960433    5899 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:42:33.960445    5899 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:42:33.960475    5899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:19:89:c7:b7:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/disk.qcow2
	I0816 10:42:33.962103    5899 main.go:141] libmachine: STDOUT: 
	I0816 10:42:33.962118    5899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:42:33.962137    5899 client.go:171] duration metric: took 236.44125ms to LocalClient.Create
	I0816 10:42:35.964260    5899 start.go:128] duration metric: took 2.262330541s to createHost
	I0816 10:42:35.964311    5899 start.go:83] releasing machines lock for "bridge-122000", held for 2.262428416s
	W0816 10:42:35.964416    5899 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:35.979851    5899 out.go:177] * Deleting "bridge-122000" in qemu2 ...
	W0816 10:42:35.997161    5899 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:35.997172    5899 start.go:729] Will try again in 5 seconds ...
	I0816 10:42:40.997406    5899 start.go:360] acquireMachinesLock for bridge-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:42:40.997848    5899 start.go:364] duration metric: took 360.584µs to acquireMachinesLock for "bridge-122000"
	I0816 10:42:40.997959    5899 start.go:93] Provisioning new machine with config: &{Name:bridge-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.0 ClusterName:bridge-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:42:40.998195    5899 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:42:41.008989    5899 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:42:41.054611    5899 start.go:159] libmachine.API.Create for "bridge-122000" (driver="qemu2")
	I0816 10:42:41.054669    5899 client.go:168] LocalClient.Create starting
	I0816 10:42:41.054782    5899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:42:41.054864    5899 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:41.054878    5899 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:41.054941    5899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:42:41.054986    5899 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:41.054997    5899 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:41.055505    5899 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:42:41.234582    5899 main.go:141] libmachine: Creating SSH key...
	I0816 10:42:41.405451    5899 main.go:141] libmachine: Creating Disk image...
	I0816 10:42:41.405461    5899 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:42:41.405664    5899 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/disk.qcow2
	I0816 10:42:41.415481    5899 main.go:141] libmachine: STDOUT: 
	I0816 10:42:41.415499    5899 main.go:141] libmachine: STDERR: 
	I0816 10:42:41.415565    5899 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/disk.qcow2 +20000M
	I0816 10:42:41.423792    5899 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:42:41.423807    5899 main.go:141] libmachine: STDERR: 
	I0816 10:42:41.423820    5899 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/disk.qcow2
	I0816 10:42:41.423825    5899 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:42:41.423834    5899 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:42:41.423866    5899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:77:58:c5:7c:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/bridge-122000/disk.qcow2
	I0816 10:42:41.425486    5899 main.go:141] libmachine: STDOUT: 
	I0816 10:42:41.425499    5899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:42:41.425512    5899 client.go:171] duration metric: took 370.84625ms to LocalClient.Create
	I0816 10:42:43.427669    5899 start.go:128] duration metric: took 2.4294835s to createHost
	I0816 10:42:43.427748    5899 start.go:83] releasing machines lock for "bridge-122000", held for 2.429926083s
	W0816 10:42:43.428110    5899 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:43.439771    5899 out.go:201] 
	W0816 10:42:43.442808    5899 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:42:43.442835    5899 out.go:270] * 
	* 
	W0816 10:42:43.445447    5899 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:42:43.452787    5899 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.88s)
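
The stderr trace shows the driver's recovery path: StartHost fails, the half-created profile is deleted (`* Deleting "bridge-122000" in qemu2 ...`), and creation is retried exactly once after a fixed 5-second sleep before minikube gives up with GUEST_PROVISION. A condensed sketch of that shape (our simplification for illustration only; minikube's real logic is in the start.go paths referenced by the log line numbers above):

	// Hypothetical condensation of the retry visible at start.go:714/729
	// in the trace; not minikube's actual implementation.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for libmachine.API.Create; with the daemon down
	// it always fails the way the traces do.
	func createHost() error {
		return errors.New(`connect to "/var/run/socket_vmnet": connection refused`)
	}

	func startWithRetry() error {
		err := createHost()
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // fixed back-off seen in the trace
		return createHost()         // second and final attempt
	}

	func main() {
		if err := startWithRetry(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}

Because both attempts hit the same dead socket, the second create fails identically, which is why each profile in the group burns roughly 10 seconds (two ~2.5s create attempts plus the 5s wait) before failing.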

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.760694333s)

                                                
                                                
-- stdout --
	* [kubenet-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-122000" primary control-plane node in "kubenet-122000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-122000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:42:45.664348    6014 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:42:45.664482    6014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:42:45.664485    6014 out.go:358] Setting ErrFile to fd 2...
	I0816 10:42:45.664487    6014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:42:45.664616    6014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:42:45.665685    6014 out.go:352] Setting JSON to false
	I0816 10:42:45.682116    6014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4328,"bootTime":1723825837,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:42:45.682183    6014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:42:45.689403    6014 out.go:177] * [kubenet-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:42:45.697440    6014 notify.go:220] Checking for updates...
	I0816 10:42:45.702339    6014 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:42:45.710238    6014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:42:45.716379    6014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:42:45.723408    6014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:42:45.731438    6014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:42:45.741396    6014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:42:45.746882    6014 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:42:45.746966    6014 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:42:45.747014    6014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:42:45.751435    6014 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:42:45.756429    6014 start.go:297] selected driver: qemu2
	I0816 10:42:45.756436    6014 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:42:45.756444    6014 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:42:45.759166    6014 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:42:45.763427    6014 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:42:45.768503    6014 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:42:45.768524    6014 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0816 10:42:45.768569    6014 start.go:340] cluster config:
	{Name:kubenet-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:42:45.773105    6014 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:42:45.779306    6014 out.go:177] * Starting "kubenet-122000" primary control-plane node in "kubenet-122000" cluster
	I0816 10:42:45.783379    6014 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:42:45.783410    6014 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:42:45.783424    6014 cache.go:56] Caching tarball of preloaded images
	I0816 10:42:45.783520    6014 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:42:45.783527    6014 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:42:45.783626    6014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/kubenet-122000/config.json ...
	I0816 10:42:45.783640    6014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/kubenet-122000/config.json: {Name:mk9e09ee043afc8029e3802de3c621b4e54f7a54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:42:45.784089    6014 start.go:360] acquireMachinesLock for kubenet-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:42:45.784128    6014 start.go:364] duration metric: took 32.875µs to acquireMachinesLock for "kubenet-122000"
	I0816 10:42:45.784143    6014 start.go:93] Provisioning new machine with config: &{Name:kubenet-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kubenet-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:42:45.784186    6014 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:42:45.788454    6014 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:42:45.808268    6014 start.go:159] libmachine.API.Create for "kubenet-122000" (driver="qemu2")
	I0816 10:42:45.808297    6014 client.go:168] LocalClient.Create starting
	I0816 10:42:45.808362    6014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:42:45.808396    6014 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:45.808405    6014 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:45.808448    6014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:42:45.808480    6014 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:45.808489    6014 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:45.808936    6014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:42:45.965616    6014 main.go:141] libmachine: Creating SSH key...
	I0816 10:42:46.039576    6014 main.go:141] libmachine: Creating Disk image...
	I0816 10:42:46.039582    6014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:42:46.039747    6014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/disk.qcow2
	I0816 10:42:46.049019    6014 main.go:141] libmachine: STDOUT: 
	I0816 10:42:46.049053    6014 main.go:141] libmachine: STDERR: 
	I0816 10:42:46.049108    6014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/disk.qcow2 +20000M
	I0816 10:42:46.057129    6014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:42:46.057146    6014 main.go:141] libmachine: STDERR: 
	I0816 10:42:46.057161    6014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/disk.qcow2
	I0816 10:42:46.057169    6014 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:42:46.057188    6014 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:42:46.057222    6014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:be:27:c6:4a:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/disk.qcow2
	I0816 10:42:46.058936    6014 main.go:141] libmachine: STDOUT: 
	I0816 10:42:46.058954    6014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:42:46.058977    6014 client.go:171] duration metric: took 250.679959ms to LocalClient.Create
	I0816 10:42:48.061127    6014 start.go:128] duration metric: took 2.276960083s to createHost
	I0816 10:42:48.061228    6014 start.go:83] releasing machines lock for "kubenet-122000", held for 2.277142958s
	W0816 10:42:48.061358    6014 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:48.068708    6014 out.go:177] * Deleting "kubenet-122000" in qemu2 ...
	W0816 10:42:48.103863    6014 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:48.103890    6014 start.go:729] Will try again in 5 seconds ...
	I0816 10:42:53.105943    6014 start.go:360] acquireMachinesLock for kubenet-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:42:53.106223    6014 start.go:364] duration metric: took 222.792µs to acquireMachinesLock for "kubenet-122000"
	I0816 10:42:53.106280    6014 start.go:93] Provisioning new machine with config: &{Name:kubenet-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:kubenet-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:42:53.106378    6014 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:42:53.114663    6014 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:42:53.138584    6014 start.go:159] libmachine.API.Create for "kubenet-122000" (driver="qemu2")
	I0816 10:42:53.138637    6014 client.go:168] LocalClient.Create starting
	I0816 10:42:53.138706    6014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:42:53.138744    6014 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:53.138755    6014 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:53.138796    6014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:42:53.138823    6014 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:53.138833    6014 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:53.139188    6014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:42:53.299843    6014 main.go:141] libmachine: Creating SSH key...
	I0816 10:42:53.331518    6014 main.go:141] libmachine: Creating Disk image...
	I0816 10:42:53.331523    6014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:42:53.331701    6014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/disk.qcow2
	I0816 10:42:53.341006    6014 main.go:141] libmachine: STDOUT: 
	I0816 10:42:53.341026    6014 main.go:141] libmachine: STDERR: 
	I0816 10:42:53.341071    6014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/disk.qcow2 +20000M
	I0816 10:42:53.349079    6014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:42:53.349097    6014 main.go:141] libmachine: STDERR: 
	I0816 10:42:53.349108    6014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/disk.qcow2
	I0816 10:42:53.349111    6014 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:42:53.349124    6014 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:42:53.349157    6014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:34:3f:77:ed:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/kubenet-122000/disk.qcow2
	I0816 10:42:53.350913    6014 main.go:141] libmachine: STDOUT: 
	I0816 10:42:53.350929    6014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:42:53.350941    6014 client.go:171] duration metric: took 212.304875ms to LocalClient.Create
	I0816 10:42:55.353117    6014 start.go:128] duration metric: took 2.246754708s to createHost
	I0816 10:42:55.353238    6014 start.go:83] releasing machines lock for "kubenet-122000", held for 2.247041833s
	W0816 10:42:55.353659    6014 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:55.368287    6014 out.go:201] 
	W0816 10:42:55.371366    6014 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:42:55.371391    6014 out.go:270] * 
	* 
	W0816 10:42:55.373851    6014 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:42:55.384358    6014 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.76s)
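
The `executing:` lines show why a dead daemon is fatal before QEMU even boots: qemu-system-aarch64 is not launched directly but wrapped by socket_vmnet_client, which first connects to /var/run/socket_vmnet and only then starts QEMU, handing it the connected descriptor as `-netdev socket,id=net0,fd=3`. A trimmed sketch of that wrapping (argument list abridged from the log line above; a hypothetical illustration, not the driver's code):

	// Hypothetical sketch of the socket_vmnet_client wrapping from the
	// "executing:" lines; flags abridged, paths copied from the trace.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// socket_vmnet_client dials the daemon socket and runs QEMU with
		// the connection as fd 3; if the dial fails, QEMU never starts.
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet",
			"qemu-system-aarch64",
			"-M", "virt,highmem=off",
			"-cpu", "host",
			"-accel", "hvf",
			"-m", "3072", "-smp", "2",
			"-device", "virtio-net-pci,netdev=net0",
			"-netdev", "socket,id=net0,fd=3", // fd 3 supplied by the wrapper
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "launch failed:", err)
		}
	}

This also explains why the failure mode is identical across CNI variants: the network-plugin flag never comes into play because the guest is never created.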

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.760330209s)

                                                
                                                
-- stdout --
	* [custom-flannel-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-122000" primary control-plane node in "custom-flannel-122000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-122000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:42:57.601237    6128 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:42:57.601359    6128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:42:57.601362    6128 out.go:358] Setting ErrFile to fd 2...
	I0816 10:42:57.601365    6128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:42:57.601494    6128 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:42:57.602589    6128 out.go:352] Setting JSON to false
	I0816 10:42:57.619437    6128 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4340,"bootTime":1723825837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:42:57.619508    6128 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:42:57.624848    6128 out.go:177] * [custom-flannel-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:42:57.632824    6128 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:42:57.632854    6128 notify.go:220] Checking for updates...
	I0816 10:42:57.638761    6128 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:42:57.641733    6128 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:42:57.644690    6128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:42:57.647681    6128 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:42:57.650751    6128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:42:57.654125    6128 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:42:57.654195    6128 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:42:57.654242    6128 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:42:57.658708    6128 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:42:57.665717    6128 start.go:297] selected driver: qemu2
	I0816 10:42:57.665724    6128 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:42:57.665734    6128 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:42:57.667926    6128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:42:57.670749    6128 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:42:57.673793    6128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:42:57.673818    6128 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0816 10:42:57.673824    6128 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0816 10:42:57.673856    6128 start.go:340] cluster config:
	{Name:custom-flannel-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:42:57.677217    6128 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:42:57.684778    6128 out.go:177] * Starting "custom-flannel-122000" primary control-plane node in "custom-flannel-122000" cluster
	I0816 10:42:57.688722    6128 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:42:57.688736    6128 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:42:57.688745    6128 cache.go:56] Caching tarball of preloaded images
	I0816 10:42:57.688797    6128 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:42:57.688802    6128 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:42:57.688866    6128 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/custom-flannel-122000/config.json ...
	I0816 10:42:57.688876    6128 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/custom-flannel-122000/config.json: {Name:mk7417b79acf2d7157f44fa405003e7fbcce3e89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:42:57.689150    6128 start.go:360] acquireMachinesLock for custom-flannel-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:42:57.689185    6128 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "custom-flannel-122000"
	I0816 10:42:57.689196    6128 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:42:57.689220    6128 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:42:57.693753    6128 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:42:57.708818    6128 start.go:159] libmachine.API.Create for "custom-flannel-122000" (driver="qemu2")
	I0816 10:42:57.708841    6128 client.go:168] LocalClient.Create starting
	I0816 10:42:57.708899    6128 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:42:57.708931    6128 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:57.708939    6128 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:57.708985    6128 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:42:57.709007    6128 main.go:141] libmachine: Decoding PEM data...
	I0816 10:42:57.709014    6128 main.go:141] libmachine: Parsing certificate...
	I0816 10:42:57.709475    6128 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:42:57.865354    6128 main.go:141] libmachine: Creating SSH key...
	I0816 10:42:57.911950    6128 main.go:141] libmachine: Creating Disk image...
	I0816 10:42:57.911960    6128 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:42:57.912153    6128 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/disk.qcow2
	I0816 10:42:57.921740    6128 main.go:141] libmachine: STDOUT: 
	I0816 10:42:57.921757    6128 main.go:141] libmachine: STDERR: 
	I0816 10:42:57.921811    6128 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/disk.qcow2 +20000M
	I0816 10:42:57.930209    6128 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:42:57.930234    6128 main.go:141] libmachine: STDERR: 
	I0816 10:42:57.930251    6128 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/disk.qcow2
	I0816 10:42:57.930255    6128 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:42:57.930267    6128 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:42:57.930295    6128 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a3:c5:62:ab:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/disk.qcow2
	I0816 10:42:57.931997    6128 main.go:141] libmachine: STDOUT: 
	I0816 10:42:57.932011    6128 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:42:57.932030    6128 client.go:171] duration metric: took 223.189958ms to LocalClient.Create
	I0816 10:42:59.934195    6128 start.go:128] duration metric: took 2.244996958s to createHost
	I0816 10:42:59.934268    6128 start.go:83] releasing machines lock for "custom-flannel-122000", held for 2.245126834s
	W0816 10:42:59.934353    6128 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:59.940856    6128 out.go:177] * Deleting "custom-flannel-122000" in qemu2 ...
	W0816 10:42:59.979750    6128 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:42:59.979784    6128 start.go:729] Will try again in 5 seconds ...
	I0816 10:43:04.981870    6128 start.go:360] acquireMachinesLock for custom-flannel-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:43:04.982505    6128 start.go:364] duration metric: took 506.709µs to acquireMachinesLock for "custom-flannel-122000"
	I0816 10:43:04.982686    6128 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:43:04.982994    6128 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:43:04.988880    6128 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:43:05.036295    6128 start.go:159] libmachine.API.Create for "custom-flannel-122000" (driver="qemu2")
	I0816 10:43:05.036350    6128 client.go:168] LocalClient.Create starting
	I0816 10:43:05.036469    6128 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:43:05.036544    6128 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:05.036562    6128 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:05.036618    6128 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:43:05.036666    6128 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:05.036678    6128 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:05.037349    6128 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:43:05.195626    6128 main.go:141] libmachine: Creating SSH key...
	I0816 10:43:05.269040    6128 main.go:141] libmachine: Creating Disk image...
	I0816 10:43:05.269047    6128 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:43:05.269231    6128 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/disk.qcow2
	I0816 10:43:05.278549    6128 main.go:141] libmachine: STDOUT: 
	I0816 10:43:05.278569    6128 main.go:141] libmachine: STDERR: 
	I0816 10:43:05.278614    6128 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/disk.qcow2 +20000M
	I0816 10:43:05.286676    6128 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:43:05.286693    6128 main.go:141] libmachine: STDERR: 
	I0816 10:43:05.286704    6128 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/disk.qcow2
	I0816 10:43:05.286708    6128 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:43:05.286721    6128 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:43:05.286746    6128 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:1d:05:77:f1:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/custom-flannel-122000/disk.qcow2
	I0816 10:43:05.288427    6128 main.go:141] libmachine: STDOUT: 
	I0816 10:43:05.288441    6128 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:43:05.288454    6128 client.go:171] duration metric: took 252.101083ms to LocalClient.Create
	I0816 10:43:07.290370    6128 start.go:128] duration metric: took 2.307407875s to createHost
	I0816 10:43:07.290420    6128 start.go:83] releasing machines lock for "custom-flannel-122000", held for 2.307938584s
	W0816 10:43:07.290679    6128 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:07.304129    6128 out.go:201] 
	W0816 10:43:07.307180    6128 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:43:07.307194    6128 out.go:270] * 
	* 
	W0816 10:43:07.308406    6128 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:43:07.319073    6128 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.76s)
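
Every Start failure in this group has the same proximate cause: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and each attempt dies with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. nothing on this host is listening on the socket_vmnet unix socket. A minimal probe, assuming only the SocketVMnetPath captured in the logs above, can confirm the socket's state independently of minikube (an illustrative sketch, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Path taken from SocketVMnetPath in the cluster config logged above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A "connection refused" here reproduces the STDERR that
			// libmachine captured from socket_vmnet_client.
			fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections on", sock)
	}

If the probe fails the same way, the fix belongs on the CI host (restore the socket_vmnet daemon so the socket is served again) rather than in the tests, and the remaining failures in this group should clear with it.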

TestNetworkPlugins/group/calico/Start (9.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.804920834s)

-- stdout --
	* [calico-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-122000" primary control-plane node in "calico-122000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-122000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:43:09.709390    6249 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:43:09.709518    6249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:43:09.709520    6249 out.go:358] Setting ErrFile to fd 2...
	I0816 10:43:09.709523    6249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:43:09.709648    6249 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:43:09.710781    6249 out.go:352] Setting JSON to false
	I0816 10:43:09.726938    6249 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4352,"bootTime":1723825837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:43:09.727021    6249 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:43:09.732859    6249 out.go:177] * [calico-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:43:09.739848    6249 notify.go:220] Checking for updates...
	I0816 10:43:09.739858    6249 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:43:09.744719    6249 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:43:09.747689    6249 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:43:09.750747    6249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:43:09.753705    6249 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:43:09.756696    6249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:43:09.760108    6249 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:43:09.760176    6249 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:43:09.760223    6249 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:43:09.764696    6249 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:43:09.771731    6249 start.go:297] selected driver: qemu2
	I0816 10:43:09.771740    6249 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:43:09.771748    6249 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:43:09.773980    6249 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:43:09.777746    6249 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:43:09.780819    6249 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:43:09.780871    6249 cni.go:84] Creating CNI manager for "calico"
	I0816 10:43:09.780876    6249 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0816 10:43:09.780911    6249 start.go:340] cluster config:
	{Name:calico-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:43:09.784614    6249 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:43:09.791671    6249 out.go:177] * Starting "calico-122000" primary control-plane node in "calico-122000" cluster
	I0816 10:43:09.795721    6249 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:43:09.795745    6249 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:43:09.795755    6249 cache.go:56] Caching tarball of preloaded images
	I0816 10:43:09.795829    6249 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:43:09.795835    6249 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:43:09.795915    6249 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/calico-122000/config.json ...
	I0816 10:43:09.795931    6249 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/calico-122000/config.json: {Name:mk93c24385d2477d4ec3bb912df3372850cd21ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:43:09.796154    6249 start.go:360] acquireMachinesLock for calico-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:43:09.796187    6249 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "calico-122000"
	I0816 10:43:09.796204    6249 start.go:93] Provisioning new machine with config: &{Name:calico-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:43:09.796233    6249 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:43:09.804749    6249 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:43:09.821046    6249 start.go:159] libmachine.API.Create for "calico-122000" (driver="qemu2")
	I0816 10:43:09.821075    6249 client.go:168] LocalClient.Create starting
	I0816 10:43:09.821131    6249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:43:09.821163    6249 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:09.821172    6249 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:09.821229    6249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:43:09.821251    6249 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:09.821258    6249 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:09.821617    6249 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:43:09.978881    6249 main.go:141] libmachine: Creating SSH key...
	I0816 10:43:10.078383    6249 main.go:141] libmachine: Creating Disk image...
	I0816 10:43:10.078390    6249 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:43:10.078586    6249 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/disk.qcow2
	I0816 10:43:10.088531    6249 main.go:141] libmachine: STDOUT: 
	I0816 10:43:10.088562    6249 main.go:141] libmachine: STDERR: 
	I0816 10:43:10.088617    6249 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/disk.qcow2 +20000M
	I0816 10:43:10.096772    6249 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:43:10.096788    6249 main.go:141] libmachine: STDERR: 
	I0816 10:43:10.096801    6249 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/disk.qcow2
	I0816 10:43:10.096810    6249 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:43:10.096824    6249 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:43:10.096853    6249 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:cd:7c:6c:c8:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/disk.qcow2
	I0816 10:43:10.098587    6249 main.go:141] libmachine: STDOUT: 
	I0816 10:43:10.098602    6249 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:43:10.098622    6249 client.go:171] duration metric: took 277.550459ms to LocalClient.Create
	I0816 10:43:12.100775    6249 start.go:128] duration metric: took 2.30457375s to createHost
	I0816 10:43:12.100829    6249 start.go:83] releasing machines lock for "calico-122000", held for 2.304687583s
	W0816 10:43:12.100898    6249 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:12.114741    6249 out.go:177] * Deleting "calico-122000" in qemu2 ...
	W0816 10:43:12.138031    6249 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:12.138049    6249 start.go:729] Will try again in 5 seconds ...
	I0816 10:43:17.140158    6249 start.go:360] acquireMachinesLock for calico-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:43:17.140606    6249 start.go:364] duration metric: took 322.125µs to acquireMachinesLock for "calico-122000"
	I0816 10:43:17.140674    6249 start.go:93] Provisioning new machine with config: &{Name:calico-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:43:17.140919    6249 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:43:17.149528    6249 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:43:17.185857    6249 start.go:159] libmachine.API.Create for "calico-122000" (driver="qemu2")
	I0816 10:43:17.185912    6249 client.go:168] LocalClient.Create starting
	I0816 10:43:17.186018    6249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:43:17.186073    6249 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:17.186102    6249 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:17.186162    6249 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:43:17.186207    6249 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:17.186218    6249 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:17.186655    6249 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:43:17.349233    6249 main.go:141] libmachine: Creating SSH key...
	I0816 10:43:17.420322    6249 main.go:141] libmachine: Creating Disk image...
	I0816 10:43:17.420331    6249 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:43:17.420528    6249 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/disk.qcow2
	I0816 10:43:17.430209    6249 main.go:141] libmachine: STDOUT: 
	I0816 10:43:17.430226    6249 main.go:141] libmachine: STDERR: 
	I0816 10:43:17.430280    6249 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/disk.qcow2 +20000M
	I0816 10:43:17.438394    6249 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:43:17.438409    6249 main.go:141] libmachine: STDERR: 
	I0816 10:43:17.438428    6249 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/disk.qcow2
	I0816 10:43:17.438433    6249 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:43:17.438441    6249 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:43:17.438469    6249 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:5a:df:41:bf:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/calico-122000/disk.qcow2
	I0816 10:43:17.440081    6249 main.go:141] libmachine: STDOUT: 
	I0816 10:43:17.440130    6249 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:43:17.440142    6249 client.go:171] duration metric: took 254.232083ms to LocalClient.Create
	I0816 10:43:19.442314    6249 start.go:128] duration metric: took 2.301413583s to createHost
	I0816 10:43:19.442401    6249 start.go:83] releasing machines lock for "calico-122000", held for 2.301822125s
	W0816 10:43:19.442781    6249 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:19.455422    6249 out.go:201] 
	W0816 10:43:19.459527    6249 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:43:19.459557    6249 out.go:270] * 
	* 
	W0816 10:43:19.462080    6249 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:43:19.471421    6249 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.81s)
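
The calico run fails identically, and the near-constant durations across this group (9.76s, 9.81s, 9.74s) follow from the retry shape visible in the logs: createHost fails after roughly 2.3 seconds, minikube waits a fixed 5 seconds ("Will try again in 5 seconds ..."), and a second ~2.3-second attempt fails the same way before the run exits with status 80. A sketch of that control flow, with a hypothetical createHost standing in for the real libmachine call:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost is a stand-in for the real libmachine create call,
	// hard-coded to fail the way this host does.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the fixed pause logged as "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}

The sketch touches no VM or network; it only documents why each of these tests burns just under ten seconds before failing.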

TestNetworkPlugins/group/false/Start (9.74s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
E0816 10:43:25.866717    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-122000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.739151292s)

-- stdout --
	* [false-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-122000" primary control-plane node in "false-122000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-122000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:43:21.879399    6371 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:43:21.879552    6371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:43:21.879559    6371 out.go:358] Setting ErrFile to fd 2...
	I0816 10:43:21.879562    6371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:43:21.879726    6371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:43:21.881130    6371 out.go:352] Setting JSON to false
	I0816 10:43:21.899563    6371 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4364,"bootTime":1723825837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:43:21.899666    6371 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:43:21.905515    6371 out.go:177] * [false-122000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:43:21.915565    6371 notify.go:220] Checking for updates...
	I0816 10:43:21.919494    6371 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:43:21.922504    6371 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:43:21.925444    6371 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:43:21.928495    6371 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:43:21.931540    6371 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:43:21.934454    6371 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:43:21.937937    6371 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:43:21.938004    6371 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:43:21.938066    6371 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:43:21.942499    6371 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:43:21.951487    6371 start.go:297] selected driver: qemu2
	I0816 10:43:21.951497    6371 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:43:21.951503    6371 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:43:21.953955    6371 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:43:21.957529    6371 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:43:21.960532    6371 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:43:21.960551    6371 cni.go:84] Creating CNI manager for "false"
	I0816 10:43:21.960585    6371 start.go:340] cluster config:
	{Name:false-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:43:21.964714    6371 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:43:21.971382    6371 out.go:177] * Starting "false-122000" primary control-plane node in "false-122000" cluster
	I0816 10:43:21.975497    6371 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:43:21.975530    6371 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:43:21.975539    6371 cache.go:56] Caching tarball of preloaded images
	I0816 10:43:21.975624    6371 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:43:21.975631    6371 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:43:21.975703    6371 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/false-122000/config.json ...
	I0816 10:43:21.975714    6371 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/false-122000/config.json: {Name:mk66c2a3aeb61ab74aa2ab57075009f0e6a2c9ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:43:21.976032    6371 start.go:360] acquireMachinesLock for false-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:43:21.976066    6371 start.go:364] duration metric: took 28µs to acquireMachinesLock for "false-122000"
	I0816 10:43:21.976078    6371 start.go:93] Provisioning new machine with config: &{Name:false-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:43:21.976104    6371 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:43:21.979468    6371 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:43:21.995753    6371 start.go:159] libmachine.API.Create for "false-122000" (driver="qemu2")
	I0816 10:43:21.995786    6371 client.go:168] LocalClient.Create starting
	I0816 10:43:21.995865    6371 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:43:21.995898    6371 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:21.995909    6371 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:21.995952    6371 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:43:21.995975    6371 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:21.995992    6371 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:21.996467    6371 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:43:22.152142    6371 main.go:141] libmachine: Creating SSH key...
	I0816 10:43:22.230272    6371 main.go:141] libmachine: Creating Disk image...
	I0816 10:43:22.230282    6371 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:43:22.230488    6371 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/disk.qcow2
	I0816 10:43:22.240002    6371 main.go:141] libmachine: STDOUT: 
	I0816 10:43:22.240021    6371 main.go:141] libmachine: STDERR: 
	I0816 10:43:22.240068    6371 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/disk.qcow2 +20000M
	I0816 10:43:22.248101    6371 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:43:22.248115    6371 main.go:141] libmachine: STDERR: 
	I0816 10:43:22.248133    6371 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/disk.qcow2
	I0816 10:43:22.248137    6371 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:43:22.248151    6371 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:43:22.248183    6371 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:da:fd:fc:ab:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/disk.qcow2
	I0816 10:43:22.249796    6371 main.go:141] libmachine: STDOUT: 
	I0816 10:43:22.249810    6371 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:43:22.249828    6371 client.go:171] duration metric: took 254.044125ms to LocalClient.Create
	I0816 10:43:24.251967    6371 start.go:128] duration metric: took 2.275889959s to createHost
	I0816 10:43:24.252061    6371 start.go:83] releasing machines lock for "false-122000", held for 2.276040625s
	W0816 10:43:24.252109    6371 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:24.256998    6371 out.go:177] * Deleting "false-122000" in qemu2 ...
	W0816 10:43:24.283430    6371 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:24.283452    6371 start.go:729] Will try again in 5 seconds ...
	I0816 10:43:29.285445    6371 start.go:360] acquireMachinesLock for false-122000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:43:29.285730    6371 start.go:364] duration metric: took 234.333µs to acquireMachinesLock for "false-122000"
	I0816 10:43:29.285793    6371 start.go:93] Provisioning new machine with config: &{Name:false-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:43:29.285911    6371 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:43:29.294105    6371 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0816 10:43:29.318562    6371 start.go:159] libmachine.API.Create for "false-122000" (driver="qemu2")
	I0816 10:43:29.318593    6371 client.go:168] LocalClient.Create starting
	I0816 10:43:29.318682    6371 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:43:29.318721    6371 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:29.318731    6371 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:29.318777    6371 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:43:29.318804    6371 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:29.318813    6371 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:29.319166    6371 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:43:29.481395    6371 main.go:141] libmachine: Creating SSH key...
	I0816 10:43:29.523146    6371 main.go:141] libmachine: Creating Disk image...
	I0816 10:43:29.523162    6371 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:43:29.523413    6371 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/disk.qcow2
	I0816 10:43:29.533483    6371 main.go:141] libmachine: STDOUT: 
	I0816 10:43:29.533519    6371 main.go:141] libmachine: STDERR: 
	I0816 10:43:29.533601    6371 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/disk.qcow2 +20000M
	I0816 10:43:29.543505    6371 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:43:29.543530    6371 main.go:141] libmachine: STDERR: 
	I0816 10:43:29.543544    6371 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/disk.qcow2
	I0816 10:43:29.543552    6371 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:43:29.543585    6371 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:43:29.543615    6371 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:29:9f:7d:8c:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/false-122000/disk.qcow2
	I0816 10:43:29.545963    6371 main.go:141] libmachine: STDOUT: 
	I0816 10:43:29.545983    6371 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:43:29.545997    6371 client.go:171] duration metric: took 227.402375ms to LocalClient.Create
	I0816 10:43:31.548180    6371 start.go:128] duration metric: took 2.262290375s to createHost
	I0816 10:43:31.548278    6371 start.go:83] releasing machines lock for "false-122000", held for 2.262582458s
	W0816 10:43:31.548611    6371 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:31.559333    6371 out.go:201] 
	W0816 10:43:31.563351    6371 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:43:31.563403    6371 out.go:270] * 
	* 
	W0816 10:43:31.566241    6371 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:43:31.575147    6371 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.74s)
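Every start failure in this report shares the same root cause: minikube launches qemu via /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet and then hands the connected socket to qemu (hence the -netdev socket,id=net0,fd=3 argument in the command above). With the daemon unreachable, the client exits immediately with "Connection refused" and no VM ever boots. A minimal diagnostic sketch for the build host, assuming socket_vmnet was installed via Homebrew (the service name and install method are assumptions, not taken from this log):

    # Check whether the daemon is running and its socket exists
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet

    # If missing, (re)start it; socket_vmnet needs root to use vmnet.framework
    sudo brew services restart socket_vmnet

Once the socket accepts connections, the same recorded qemu invocation should get past the "Failed to connect" stage.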

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-782000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-782000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.837778542s)

                                                
                                                
-- stdout --
	* [old-k8s-version-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-782000" primary control-plane node in "old-k8s-version-782000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-782000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:43:33.762728    6486 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:43:33.762861    6486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:43:33.762865    6486 out.go:358] Setting ErrFile to fd 2...
	I0816 10:43:33.762868    6486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:43:33.763003    6486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:43:33.764130    6486 out.go:352] Setting JSON to false
	I0816 10:43:33.780926    6486 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4376,"bootTime":1723825837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:43:33.780992    6486 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:43:33.787866    6486 out.go:177] * [old-k8s-version-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:43:33.795776    6486 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:43:33.795810    6486 notify.go:220] Checking for updates...
	I0816 10:43:33.802749    6486 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:43:33.805720    6486 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:43:33.807268    6486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:43:33.810697    6486 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:43:33.813782    6486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:43:33.817030    6486 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:43:33.817104    6486 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:43:33.817172    6486 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:43:33.820692    6486 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:43:33.827714    6486 start.go:297] selected driver: qemu2
	I0816 10:43:33.827720    6486 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:43:33.827726    6486 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:43:33.829953    6486 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:43:33.832769    6486 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:43:33.835873    6486 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:43:33.835901    6486 cni.go:84] Creating CNI manager for ""
	I0816 10:43:33.835907    6486 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0816 10:43:33.835939    6486 start.go:340] cluster config:
	{Name:old-k8s-version-782000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:43:33.839307    6486 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:43:33.846731    6486 out.go:177] * Starting "old-k8s-version-782000" primary control-plane node in "old-k8s-version-782000" cluster
	I0816 10:43:33.850755    6486 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 10:43:33.850770    6486 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 10:43:33.850779    6486 cache.go:56] Caching tarball of preloaded images
	I0816 10:43:33.850840    6486 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:43:33.850846    6486 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0816 10:43:33.850926    6486 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/old-k8s-version-782000/config.json ...
	I0816 10:43:33.850938    6486 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/old-k8s-version-782000/config.json: {Name:mkfa237383d6e23a8f2765bf4ea1d9dcccfb6d8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:43:33.851346    6486 start.go:360] acquireMachinesLock for old-k8s-version-782000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:43:33.851378    6486 start.go:364] duration metric: took 25.666µs to acquireMachinesLock for "old-k8s-version-782000"
	I0816 10:43:33.851390    6486 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:43:33.851419    6486 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:43:33.855750    6486 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:43:33.871470    6486 start.go:159] libmachine.API.Create for "old-k8s-version-782000" (driver="qemu2")
	I0816 10:43:33.871493    6486 client.go:168] LocalClient.Create starting
	I0816 10:43:33.871552    6486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:43:33.871582    6486 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:33.871592    6486 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:33.871629    6486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:43:33.871653    6486 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:33.871668    6486 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:33.872128    6486 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:43:34.026565    6486 main.go:141] libmachine: Creating SSH key...
	I0816 10:43:34.102568    6486 main.go:141] libmachine: Creating Disk image...
	I0816 10:43:34.102574    6486 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:43:34.102766    6486 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0816 10:43:34.112302    6486 main.go:141] libmachine: STDOUT: 
	I0816 10:43:34.112321    6486 main.go:141] libmachine: STDERR: 
	I0816 10:43:34.112383    6486 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2 +20000M
	I0816 10:43:34.120430    6486 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:43:34.120446    6486 main.go:141] libmachine: STDERR: 
	I0816 10:43:34.120463    6486 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0816 10:43:34.120468    6486 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:43:34.120482    6486 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:43:34.120508    6486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:16:63:ca:cb:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0816 10:43:34.122159    6486 main.go:141] libmachine: STDOUT: 
	I0816 10:43:34.122176    6486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:43:34.122195    6486 client.go:171] duration metric: took 250.704ms to LocalClient.Create
	I0816 10:43:36.124336    6486 start.go:128] duration metric: took 2.272944833s to createHost
	I0816 10:43:36.124398    6486 start.go:83] releasing machines lock for "old-k8s-version-782000", held for 2.273067125s
	W0816 10:43:36.124473    6486 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:36.133976    6486 out.go:177] * Deleting "old-k8s-version-782000" in qemu2 ...
	W0816 10:43:36.162549    6486 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:36.162562    6486 start.go:729] Will try again in 5 seconds ...
	I0816 10:43:41.164705    6486 start.go:360] acquireMachinesLock for old-k8s-version-782000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:43:41.165258    6486 start.go:364] duration metric: took 415.084µs to acquireMachinesLock for "old-k8s-version-782000"
	I0816 10:43:41.165318    6486 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:43:41.165626    6486 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:43:41.176391    6486 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:43:41.226549    6486 start.go:159] libmachine.API.Create for "old-k8s-version-782000" (driver="qemu2")
	I0816 10:43:41.226600    6486 client.go:168] LocalClient.Create starting
	I0816 10:43:41.226719    6486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:43:41.226793    6486 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:41.226811    6486 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:41.226876    6486 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:43:41.226922    6486 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:41.226933    6486 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:41.227679    6486 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:43:41.395298    6486 main.go:141] libmachine: Creating SSH key...
	I0816 10:43:41.510412    6486 main.go:141] libmachine: Creating Disk image...
	I0816 10:43:41.510421    6486 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:43:41.510604    6486 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0816 10:43:41.519859    6486 main.go:141] libmachine: STDOUT: 
	I0816 10:43:41.519877    6486 main.go:141] libmachine: STDERR: 
	I0816 10:43:41.519921    6486 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2 +20000M
	I0816 10:43:41.528079    6486 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:43:41.528095    6486 main.go:141] libmachine: STDERR: 
	I0816 10:43:41.528115    6486 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0816 10:43:41.528120    6486 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:43:41.528134    6486 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:43:41.528176    6486 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:ea:23:f0:b9:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0816 10:43:41.529880    6486 main.go:141] libmachine: STDOUT: 
	I0816 10:43:41.529913    6486 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:43:41.529928    6486 client.go:171] duration metric: took 303.328292ms to LocalClient.Create
	I0816 10:43:43.531763    6486 start.go:128] duration metric: took 2.366168833s to createHost
	I0816 10:43:43.531800    6486 start.go:83] releasing machines lock for "old-k8s-version-782000", held for 2.36657525s
	W0816 10:43:43.531992    6486 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-782000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:43.541027    6486 out.go:201] 
	W0816 10:43:43.549088    6486 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:43:43.549121    6486 out.go:270] * 
	* 
	W0816 10:43:43.550361    6486 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:43:43.562056    6486 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-782000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (50.928542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-782000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-782000 create -f testdata/busybox.yaml: exit status 1 (27.989791ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-782000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-782000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (30.079708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (29.10075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
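This failure (like the EnableAddonWhileActive failure below) is a cascade from FirstStart: the cluster was never provisioned, so no kubeconfig context named "old-k8s-version-782000" exists for kubectl to use. A quick way to confirm the cascade, as a sketch using the kubeconfig path from this run (not part of the test harness):

    # The profile's context will be absent after the failed first start
    kubectl --kubeconfig /Users/jenkins/minikube-integration/19461-1189/kubeconfig config get-contexts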

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-782000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-782000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-782000 describe deploy/metrics-server -n kube-system: exit status 1 (27.243333ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-782000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-782000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (29.761958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-782000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-782000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.186051375s)

                                                
                                                
-- stdout --
	* [old-k8s-version-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-782000" primary control-plane node in "old-k8s-version-782000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-782000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-782000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:43:47.177836    6543 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:43:47.177959    6543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:43:47.177962    6543 out.go:358] Setting ErrFile to fd 2...
	I0816 10:43:47.177972    6543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:43:47.178104    6543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:43:47.179110    6543 out.go:352] Setting JSON to false
	I0816 10:43:47.195464    6543 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4390,"bootTime":1723825837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:43:47.195534    6543 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:43:47.200712    6543 out.go:177] * [old-k8s-version-782000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:43:47.207684    6543 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:43:47.207727    6543 notify.go:220] Checking for updates...
	I0816 10:43:47.215668    6543 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:43:47.218684    6543 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:43:47.221674    6543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:43:47.224649    6543 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:43:47.227687    6543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:43:47.230954    6543 config.go:182] Loaded profile config "old-k8s-version-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0816 10:43:47.234673    6543 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 10:43:47.237675    6543 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:43:47.241709    6543 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 10:43:47.248670    6543 start.go:297] selected driver: qemu2
	I0816 10:43:47.248677    6543 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:43:47.248748    6543 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:43:47.251168    6543 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:43:47.251196    6543 cni.go:84] Creating CNI manager for ""
	I0816 10:43:47.251203    6543 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0816 10:43:47.251224    6543 start.go:340] cluster config:
	{Name:old-k8s-version-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:43:47.254916    6543 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:43:47.262625    6543 out.go:177] * Starting "old-k8s-version-782000" primary control-plane node in "old-k8s-version-782000" cluster
	I0816 10:43:47.266691    6543 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 10:43:47.266712    6543 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 10:43:47.266723    6543 cache.go:56] Caching tarball of preloaded images
	I0816 10:43:47.266781    6543 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:43:47.266787    6543 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0816 10:43:47.266849    6543 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/old-k8s-version-782000/config.json ...
	I0816 10:43:47.267379    6543 start.go:360] acquireMachinesLock for old-k8s-version-782000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:43:47.267407    6543 start.go:364] duration metric: took 21.417µs to acquireMachinesLock for "old-k8s-version-782000"
	I0816 10:43:47.267416    6543 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:43:47.267422    6543 fix.go:54] fixHost starting: 
	I0816 10:43:47.267536    6543 fix.go:112] recreateIfNeeded on old-k8s-version-782000: state=Stopped err=<nil>
	W0816 10:43:47.267544    6543 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:43:47.271693    6543 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-782000" ...
	I0816 10:43:47.278657    6543 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:43:47.278696    6543 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:ea:23:f0:b9:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0816 10:43:47.280604    6543 main.go:141] libmachine: STDOUT: 
	I0816 10:43:47.280622    6543 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:43:47.280649    6543 fix.go:56] duration metric: took 13.228666ms for fixHost
	I0816 10:43:47.280653    6543 start.go:83] releasing machines lock for "old-k8s-version-782000", held for 13.242167ms
	W0816 10:43:47.280660    6543 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:43:47.280697    6543 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:47.280701    6543 start.go:729] Will try again in 5 seconds ...
	I0816 10:43:52.282757    6543 start.go:360] acquireMachinesLock for old-k8s-version-782000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:43:52.283044    6543 start.go:364] duration metric: took 215.625µs to acquireMachinesLock for "old-k8s-version-782000"
	I0816 10:43:52.283095    6543 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:43:52.283106    6543 fix.go:54] fixHost starting: 
	I0816 10:43:52.283501    6543 fix.go:112] recreateIfNeeded on old-k8s-version-782000: state=Stopped err=<nil>
	W0816 10:43:52.283515    6543 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:43:52.292844    6543 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-782000" ...
	I0816 10:43:52.295914    6543 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:43:52.296035    6543 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:ea:23:f0:b9:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/old-k8s-version-782000/disk.qcow2
	I0816 10:43:52.301979    6543 main.go:141] libmachine: STDOUT: 
	I0816 10:43:52.302038    6543 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:43:52.302102    6543 fix.go:56] duration metric: took 18.996709ms for fixHost
	I0816 10:43:52.302116    6543 start.go:83] releasing machines lock for "old-k8s-version-782000", held for 19.058084ms
	W0816 10:43:52.302279    6543 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-782000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-782000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:52.310863    6543 out.go:201] 
	W0816 10:43:52.314930    6543 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:43:52.314953    6543 out.go:270] * 
	* 
	W0816 10:43:52.316194    6543 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:43:52.324861    6543 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-782000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
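The stderr above shows the retry protocol minikube applies to a failed host start: log the failure, release the machines lock, wait five seconds, and attempt fixHost exactly once more before exiting with GUEST_PROVISION. A minimal Go sketch of that control flow (illustrative only; the error string is copied from the log, everything else is invented):

package main

import (
	"errors"
	"fmt"
	"time"
)

// errRefused reproduces the driver error string from the log.
var errRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

// startHost stands in for the driver start that fails in fix.go above.
func startHost() error { return errRefused }

func main() {
	if err := startHost(); err == nil {
		return
	}
	fmt.Println("! StartHost failed, but will try again: driver start:", errRefused)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := startHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err) // second failure is fatal
	}
}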
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (54.711041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
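Every start attempt in this report fails on the same unix socket, so the underlying condition is easy to verify outside the test suite. The following diagnostic sketch assumes only the socket path that appears in the log; it is not part of minikube or the tests:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path copied from the log
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

A "connection refused" from this dial means the socket path resolves but no socket_vmnet daemon is accepting on it; a missing socket file would surface as "no such file or directory" instead. The former matches every failure above.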

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-782000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (31.213208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
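The "context does not exist" error means the failed restart never re-registered the profile in kubeconfig, so every kubectl call in this group is dead on arrival. A quick standalone check, assuming nothing beyond the standard `kubectl config get-contexts -o name` invocation:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// hasContext reports whether kubeconfig currently defines the named context.
func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if sc.Text() == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasContext("old-k8s-version-782000")
	fmt.Println(ok, err) // false, <nil> in this run: the restart never rewrote kubeconfig
}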

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-782000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-782000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-782000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.340292ms)

** stderr ** 
	error: context "old-k8s-version-782000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-782000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (30.4175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-782000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (29.5695ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
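VerifyKubernetesImages diffs an expected tag list against the output of `minikube image list --format=json`; with the host down, the command returns nothing and every tag shows as missing. A rough standalone version of that comparison is sketched below; the `repoTags` field name is an assumption about the JSON schema, and the binary path and profile are taken from the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image models one entry of `image list --format=json`; repoTags is assumed.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "old-k8s-version-782000",
		"image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list failed:", err) // expected while the host is stopped
		return
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range imgs {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Two of the eight tags the test wants; the full list is in the diff above.
	for _, want := range []string{"k8s.gcr.io/kube-apiserver:v1.20.0", "k8s.gcr.io/pause:3.2"} {
		fmt.Printf("%-40s present=%v\n", want, have[want])
	}
}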

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-782000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-782000 --alsologtostderr -v=1: exit status 83 (42.961541ms)

-- stdout --
	* The control-plane node old-k8s-version-782000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-782000"

-- /stdout --
** stderr ** 
	I0816 10:43:52.580201    6562 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:43:52.581072    6562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:43:52.581076    6562 out.go:358] Setting ErrFile to fd 2...
	I0816 10:43:52.581078    6562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:43:52.581224    6562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:43:52.581434    6562 out.go:352] Setting JSON to false
	I0816 10:43:52.581443    6562 mustload.go:65] Loading cluster: old-k8s-version-782000
	I0816 10:43:52.581642    6562 config.go:182] Loaded profile config "old-k8s-version-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0816 10:43:52.586473    6562 out.go:177] * The control-plane node old-k8s-version-782000 host is not running: state=Stopped
	I0816 10:43:52.589440    6562 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-782000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-782000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (29.633958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (29.598333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-782000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
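Note that pause exits with status 83 here rather than the 80 used for the provisioning failures; the harness keys entirely off these numeric codes. A sketch of how a harness can read a child's exit status in Go (binary and profile names are taken from the log; reading 83 as the "host not running" advisory path is inferred from the output above, not from minikube's source):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes a command and returns its exit status, or -1 if it never started.
func run(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	if err != nil {
		return -1
	}
	return 0
}

func main() {
	code := run("out/minikube-darwin-arm64", "pause", "-p", "old-k8s-version-782000")
	fmt.Println("exit status", code) // 83 in the run above: stopped host, advice printed
}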

TestStartStop/group/no-preload/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-873000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-873000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.782885125s)

-- stdout --
	* [no-preload-873000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-873000" primary control-plane node in "no-preload-873000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-873000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:43:52.899212    6579 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:43:52.902861    6579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:43:52.902865    6579 out.go:358] Setting ErrFile to fd 2...
	I0816 10:43:52.902867    6579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:43:52.903024    6579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:43:52.906129    6579 out.go:352] Setting JSON to false
	I0816 10:43:52.922672    6579 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4395,"bootTime":1723825837,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:43:52.922728    6579 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:43:52.927923    6579 out.go:177] * [no-preload-873000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:43:52.935895    6579 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:43:52.935923    6579 notify.go:220] Checking for updates...
	I0816 10:43:52.942792    6579 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:43:52.945825    6579 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:43:52.948901    6579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:43:52.951911    6579 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:43:52.954843    6579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:43:52.958240    6579 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:43:52.958315    6579 config.go:182] Loaded profile config "stopped-upgrade-403000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0816 10:43:52.958367    6579 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:43:52.962786    6579 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:43:52.969899    6579 start.go:297] selected driver: qemu2
	I0816 10:43:52.969908    6579 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:43:52.969916    6579 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:43:52.972371    6579 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:43:52.974826    6579 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:43:52.977988    6579 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:43:52.978009    6579 cni.go:84] Creating CNI manager for ""
	I0816 10:43:52.978030    6579 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:43:52.978035    6579 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:43:52.978071    6579 start.go:340] cluster config:
	{Name:no-preload-873000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:43:52.981763    6579 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:43:52.988820    6579 out.go:177] * Starting "no-preload-873000" primary control-plane node in "no-preload-873000" cluster
	I0816 10:43:52.992856    6579 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:43:52.992956    6579 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/no-preload-873000/config.json ...
	I0816 10:43:52.992982    6579 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/no-preload-873000/config.json: {Name:mkf23e9b48259d8bc090a4a7a45dd20527437aec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:43:52.992992    6579 cache.go:107] acquiring lock: {Name:mk86e1e0f0dd0a6c1f029b1a5f8e88f860876b98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:43:52.993017    6579 cache.go:107] acquiring lock: {Name:mk75e94c99c6e780f8bf9e4e0d2bcdc82cbf5db6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:43:52.993026    6579 cache.go:107] acquiring lock: {Name:mk6eebea7fba021b1184ebbfb9b3007517a09612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:43:52.993070    6579 cache.go:115] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 10:43:52.993079    6579 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.209µs
	I0816 10:43:52.993089    6579 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 10:43:52.993104    6579 cache.go:107] acquiring lock: {Name:mkc06d7af0cf5c44c8f698c8b96186c3959e0ba2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:43:52.993130    6579 cache.go:107] acquiring lock: {Name:mkf10ea3a663ddc7a42eeabda7076b5e04cb28ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:43:52.993184    6579 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0816 10:43:52.993191    6579 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 10:43:52.992995    6579 cache.go:107] acquiring lock: {Name:mkf6568f934bcb844e749583e819d52808a69739 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:43:52.993196    6579 cache.go:107] acquiring lock: {Name:mka8074933ae002ecc0c58ac50edb11ffa51fd93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:43:52.993292    6579 cache.go:107] acquiring lock: {Name:mka9f6f4ff6daab4a5bc6a340d881cd3e9dda8dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:43:52.993342    6579 start.go:360] acquireMachinesLock for no-preload-873000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:43:52.993390    6579 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0816 10:43:52.993413    6579 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 10:43:52.993392    6579 start.go:364] duration metric: took 42.667µs to acquireMachinesLock for "no-preload-873000"
	I0816 10:43:52.993445    6579 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 10:43:52.993451    6579 start.go:93] Provisioning new machine with config: &{Name:no-preload-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:43:52.993488    6579 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:43:52.993616    6579 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 10:43:52.993992    6579 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 10:43:53.000884    6579 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:43:53.001741    6579 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0816 10:43:53.005690    6579 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0816 10:43:53.005864    6579 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0816 10:43:53.005903    6579 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0816 10:43:53.005933    6579 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0816 10:43:53.006002    6579 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0816 10:43:53.006048    6579 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0816 10:43:53.018977    6579 start.go:159] libmachine.API.Create for "no-preload-873000" (driver="qemu2")
	I0816 10:43:53.019000    6579 client.go:168] LocalClient.Create starting
	I0816 10:43:53.019102    6579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:43:53.019132    6579 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:53.019142    6579 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:53.019184    6579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:43:53.019207    6579 main.go:141] libmachine: Decoding PEM data...
	I0816 10:43:53.019216    6579 main.go:141] libmachine: Parsing certificate...
	I0816 10:43:53.019565    6579 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:43:53.179907    6579 main.go:141] libmachine: Creating SSH key...
	I0816 10:43:53.248126    6579 main.go:141] libmachine: Creating Disk image...
	I0816 10:43:53.248148    6579 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:43:53.248339    6579 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2
	I0816 10:43:53.258661    6579 main.go:141] libmachine: STDOUT: 
	I0816 10:43:53.258715    6579 main.go:141] libmachine: STDERR: 
	I0816 10:43:53.258765    6579 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2 +20000M
	I0816 10:43:53.267965    6579 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:43:53.267999    6579 main.go:141] libmachine: STDERR: 
	I0816 10:43:53.268024    6579 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2
	I0816 10:43:53.268029    6579 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:43:53.268041    6579 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:43:53.268069    6579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:a3:de:dd:bc:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2
	I0816 10:43:53.270257    6579 main.go:141] libmachine: STDOUT: 
	I0816 10:43:53.270476    6579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:43:53.270510    6579 client.go:171] duration metric: took 251.512375ms to LocalClient.Create
	I0816 10:43:53.359284    6579 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0816 10:43:53.392089    6579 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0816 10:43:53.404660    6579 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0816 10:43:53.405715    6579 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0816 10:43:53.438450    6579 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0816 10:43:53.525095    6579 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0816 10:43:53.525117    6579 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 532.103042ms
	I0816 10:43:53.525123    6579 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0816 10:43:53.548972    6579 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0816 10:43:53.568252    6579 cache.go:162] opening:  /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0816 10:43:55.270642    6579 start.go:128] duration metric: took 2.277195s to createHost
	I0816 10:43:55.270666    6579 start.go:83] releasing machines lock for "no-preload-873000", held for 2.277277584s
	W0816 10:43:55.270694    6579 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:55.283671    6579 out.go:177] * Deleting "no-preload-873000" in qemu2 ...
	W0816 10:43:55.299931    6579 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:43:55.299949    6579 start.go:729] Will try again in 5 seconds ...
	I0816 10:43:56.227962    6579 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0816 10:43:56.227998    6579 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 3.235057166s
	I0816 10:43:56.228011    6579 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0816 10:43:56.415484    6579 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0816 10:43:56.415524    6579 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.422418083s
	I0816 10:43:56.415538    6579 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0816 10:43:57.288999    6579 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0816 10:43:57.289014    6579 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 4.295971208s
	I0816 10:43:57.289020    6579 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0816 10:43:57.419103    6579 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0816 10:43:57.419115    6579 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 4.42622975s
	I0816 10:43:57.419122    6579 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0816 10:43:57.689223    6579 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0816 10:43:57.689243    6579 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 4.696369125s
	I0816 10:43:57.689253    6579 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0816 10:44:00.055543    6579 cache.go:157] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0816 10:44:00.055592    6579 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.06265025s
	I0816 10:44:00.055617    6579 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0816 10:44:00.055694    6579 cache.go:87] Successfully saved all images to host disk.
	I0816 10:44:00.301188    6579 start.go:360] acquireMachinesLock for no-preload-873000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:00.301455    6579 start.go:364] duration metric: took 203.667µs to acquireMachinesLock for "no-preload-873000"
	I0816 10:44:00.301536    6579 start.go:93] Provisioning new machine with config: &{Name:no-preload-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:44:00.301648    6579 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:44:00.311071    6579 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:44:00.345334    6579 start.go:159] libmachine.API.Create for "no-preload-873000" (driver="qemu2")
	I0816 10:44:00.345406    6579 client.go:168] LocalClient.Create starting
	I0816 10:44:00.345554    6579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:44:00.345616    6579 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:00.345633    6579 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:00.345692    6579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:44:00.345731    6579 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:00.345743    6579 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:00.346164    6579 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:44:00.521708    6579 main.go:141] libmachine: Creating SSH key...
	I0816 10:44:00.592435    6579 main.go:141] libmachine: Creating Disk image...
	I0816 10:44:00.592446    6579 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:44:00.592649    6579 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2
	I0816 10:44:00.602308    6579 main.go:141] libmachine: STDOUT: 
	I0816 10:44:00.602329    6579 main.go:141] libmachine: STDERR: 
	I0816 10:44:00.602388    6579 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2 +20000M
	I0816 10:44:00.610791    6579 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:44:00.610809    6579 main.go:141] libmachine: STDERR: 
	I0816 10:44:00.610820    6579 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2
	I0816 10:44:00.610827    6579 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:44:00.610843    6579 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:00.610883    6579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:d9:a3:11:fc:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2
	I0816 10:44:00.612659    6579 main.go:141] libmachine: STDOUT: 
	I0816 10:44:00.612677    6579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:00.612691    6579 client.go:171] duration metric: took 267.276625ms to LocalClient.Create
	I0816 10:44:02.613850    6579 start.go:128] duration metric: took 2.31223475s to createHost
	I0816 10:44:02.613871    6579 start.go:83] releasing machines lock for "no-preload-873000", held for 2.312459084s
	W0816 10:44:02.613971    6579 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-873000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-873000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:02.621712    6579 out.go:201] 
	W0816 10:44:02.628637    6579 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:44:02.628644    6579 out.go:270] * 
	* 
	W0816 10:44:02.629207    6579 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:44:02.641651    6579 out.go:201] 

** /stderr **
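A side note on the stderr above: because this test passes --preload=false, the cache.go lines show minikube falling back to caching each image individually, with one named lock per image, a fast path when the tar already exists, and a "took ..." duration metric. The sketch below imitates that pattern only; the paths, names, and stubbed slow path are invented, not minikube's internals:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

// locks holds one mutex per cache path, echoing cache.go's named locks.
var locks sync.Map

func cacheImage(cacheDir, img string) error {
	dest := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
	mu, _ := locks.LoadOrStore(dest, &sync.Mutex{})
	mu.(*sync.Mutex).Lock()
	defer mu.(*sync.Mutex).Unlock()

	start := time.Now()
	defer func() { fmt.Printf("cache image %q -> %q took %s\n", img, dest, time.Since(start)) }()
	if _, err := os.Stat(dest); err == nil {
		return nil // fast path: tar already on disk, like the "exists" lines above
	}
	// The real slow path pulls the image and writes a tar; stubbed out here.
	return os.WriteFile(dest, nil, 0o644)
}

func main() {
	dir, _ := os.MkdirTemp("", "cache")
	defer os.RemoveAll(dir)
	var wg sync.WaitGroup
	for _, img := range []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/etcd:3.5.15-0"} {
		wg.Add(1)
		go func(i string) { defer wg.Done(); _ = cacheImage(dir, i) }(img)
	}
	wg.Wait()
}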
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-873000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000: exit status 7 (44.368291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.83s)
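The same stderr also records the two qemu-img steps used to build the machine disk, a raw-to-qcow2 convert followed by a +20000M resize, and both succeed even though the VM never boots. A standalone sketch issuing the same commands (requires qemu-img on PATH; the file names are placeholders, not the CI paths):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// qemuImg runs one qemu-img subcommand, mirroring its output like the log does.
func qemuImg(args ...string) error {
	cmd := exec.Command("qemu-img", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// An empty raw file stands in for the boot2docker base disk.
	if err := os.WriteFile("disk.qcow2.raw", make([]byte, 1<<20), 0o644); err != nil {
		panic(err)
	}
	if err := qemuImg("convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2"); err != nil {
		panic(err)
	}
	if err := qemuImg("resize", "disk.qcow2", "+20000M"); err != nil {
		panic(err)
	}
	fmt.Println("DONE writing disk.qcow2")
}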

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-873000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-873000 create -f testdata/busybox.yaml: exit status 1 (27.995917ms)

** stderr ** 
	error: context "no-preload-873000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-873000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000: exit status 7 (30.242333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-873000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000: exit status 7 (29.793083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-573000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-573000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.910050709s)

-- stdout --
	* [embed-certs-573000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-573000" primary control-plane node in "embed-certs-573000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-573000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:44:02.771577    6630 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:44:02.771721    6630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:02.771725    6630 out.go:358] Setting ErrFile to fd 2...
	I0816 10:44:02.771727    6630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:02.771854    6630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:44:02.773378    6630 out.go:352] Setting JSON to false
	I0816 10:44:02.792505    6630 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4405,"bootTime":1723825837,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:44:02.792615    6630 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:44:02.797554    6630 out.go:177] * [embed-certs-573000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:44:02.804441    6630 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:44:02.804465    6630 notify.go:220] Checking for updates...
	I0816 10:44:02.811592    6630 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:44:02.814567    6630 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:44:02.817666    6630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:44:02.824583    6630 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:44:02.833569    6630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:44:02.836855    6630 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:02.836956    6630 config.go:182] Loaded profile config "no-preload-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:02.837002    6630 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:44:02.844616    6630 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:44:02.856528    6630 start.go:297] selected driver: qemu2
	I0816 10:44:02.856533    6630 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:44:02.856539    6630 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:44:02.858903    6630 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:44:02.861688    6630 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:44:02.865731    6630 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:44:02.865753    6630 cni.go:84] Creating CNI manager for ""
	I0816 10:44:02.865769    6630 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:44:02.865772    6630 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:44:02.865809    6630 start.go:340] cluster config:
	{Name:embed-certs-573000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-573000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:44:02.869650    6630 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:02.878625    6630 out.go:177] * Starting "embed-certs-573000" primary control-plane node in "embed-certs-573000" cluster
	I0816 10:44:02.882525    6630 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:44:02.882563    6630 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:44:02.882575    6630 cache.go:56] Caching tarball of preloaded images
	I0816 10:44:02.882658    6630 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:44:02.882664    6630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:44:02.882725    6630 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/embed-certs-573000/config.json ...
	I0816 10:44:02.882736    6630 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/embed-certs-573000/config.json: {Name:mk74ae9e5b77b8f50fce6651a85e74c3650b4756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:44:02.882962    6630 start.go:360] acquireMachinesLock for embed-certs-573000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:02.882993    6630 start.go:364] duration metric: took 25.208µs to acquireMachinesLock for "embed-certs-573000"
	I0816 10:44:02.883005    6630 start.go:93] Provisioning new machine with config: &{Name:embed-certs-573000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-573000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:44:02.883037    6630 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:44:02.890588    6630 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:44:02.907055    6630 start.go:159] libmachine.API.Create for "embed-certs-573000" (driver="qemu2")
	I0816 10:44:02.907098    6630 client.go:168] LocalClient.Create starting
	I0816 10:44:02.907165    6630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:44:02.907195    6630 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:02.907204    6630 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:02.907246    6630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:44:02.907269    6630 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:02.907285    6630 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:02.907635    6630 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:44:03.101006    6630 main.go:141] libmachine: Creating SSH key...
	I0816 10:44:03.237620    6630 main.go:141] libmachine: Creating Disk image...
	I0816 10:44:03.237626    6630 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:44:03.237832    6630 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2
	I0816 10:44:03.247495    6630 main.go:141] libmachine: STDOUT: 
	I0816 10:44:03.247514    6630 main.go:141] libmachine: STDERR: 
	I0816 10:44:03.247554    6630 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2 +20000M
	I0816 10:44:03.255576    6630 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:44:03.255593    6630 main.go:141] libmachine: STDERR: 
	I0816 10:44:03.255604    6630 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2
	I0816 10:44:03.255608    6630 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:44:03.255620    6630 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:03.255659    6630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:42:fe:e2:dd:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2
	I0816 10:44:03.257346    6630 main.go:141] libmachine: STDOUT: 
	I0816 10:44:03.257363    6630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:03.257380    6630 client.go:171] duration metric: took 350.286ms to LocalClient.Create
	I0816 10:44:05.259506    6630 start.go:128] duration metric: took 2.376502875s to createHost
	I0816 10:44:05.259593    6630 start.go:83] releasing machines lock for "embed-certs-573000", held for 2.376645709s
	W0816 10:44:05.259640    6630 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:05.275744    6630 out.go:177] * Deleting "embed-certs-573000" in qemu2 ...
	W0816 10:44:05.307008    6630 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:05.307038    6630 start.go:729] Will try again in 5 seconds ...
	I0816 10:44:10.309088    6630 start.go:360] acquireMachinesLock for embed-certs-573000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:10.309641    6630 start.go:364] duration metric: took 459.875µs to acquireMachinesLock for "embed-certs-573000"
	I0816 10:44:10.309763    6630 start.go:93] Provisioning new machine with config: &{Name:embed-certs-573000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-573000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:44:10.310021    6630 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:44:10.319561    6630 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:44:10.371553    6630 start.go:159] libmachine.API.Create for "embed-certs-573000" (driver="qemu2")
	I0816 10:44:10.371604    6630 client.go:168] LocalClient.Create starting
	I0816 10:44:10.371704    6630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:44:10.371768    6630 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:10.371786    6630 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:10.371856    6630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:44:10.371901    6630 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:10.371916    6630 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:10.372557    6630 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:44:10.531359    6630 main.go:141] libmachine: Creating SSH key...
	I0816 10:44:10.577252    6630 main.go:141] libmachine: Creating Disk image...
	I0816 10:44:10.577257    6630 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:44:10.577445    6630 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2
	I0816 10:44:10.586720    6630 main.go:141] libmachine: STDOUT: 
	I0816 10:44:10.586742    6630 main.go:141] libmachine: STDERR: 
	I0816 10:44:10.586786    6630 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2 +20000M
	I0816 10:44:10.594779    6630 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:44:10.594794    6630 main.go:141] libmachine: STDERR: 
	I0816 10:44:10.594804    6630 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2
	I0816 10:44:10.594808    6630 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:44:10.594823    6630 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:10.594863    6630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:4f:05:44:08:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2
	I0816 10:44:10.596502    6630 main.go:141] libmachine: STDOUT: 
	I0816 10:44:10.596517    6630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:10.596529    6630 client.go:171] duration metric: took 224.925959ms to LocalClient.Create
	I0816 10:44:12.598711    6630 start.go:128] duration metric: took 2.288699083s to createHost
	I0816 10:44:12.598816    6630 start.go:83] releasing machines lock for "embed-certs-573000", held for 2.289199459s
	W0816 10:44:12.599172    6630 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-573000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-573000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:12.616776    6630 out.go:201] 
	W0816 10:44:12.623985    6630 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:44:12.624033    6630 out.go:270] * 
	* 
	W0816 10:44:12.626971    6630 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:44:12.635892    6630 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-573000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000: exit status 7 (65.896458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-573000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.98s)
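
Both VM creation attempts above die at the same step: `socket_vmnet_client` cannot connect to `/var/run/socket_vmnet`, which points at the socket_vmnet daemon not running on the build agent rather than anything profile-specific. A diagnostic sketch, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs recommend:

	# Does the socket exist, and is the daemon managing it alive?
	ls -l /var/run/socket_vmnet
	# socket_vmnet must run as root; with Homebrew it is a root-owned service:
	sudo brew services info socket_vmnet
	sudo brew services restart socket_vmnet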

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-873000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-873000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-873000 describe deploy/metrics-server -n kube-system: exit status 1 (27.27825ms)

** stderr ** 
	error: context "no-preload-873000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-873000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000: exit status 7 (31.064292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.14s)
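
This assertion compares the deployment's image against the `--images`/`--registries` overrides passed to `addons enable`, but with no running cluster there is no deployment to describe. On a healthy cluster, a check along these lines (the jsonpath query is illustrative, not from the harness) would show whether the registry override took effect:

	# Print the image actually set on the metrics-server deployment.
	kubectl --context no-preload-873000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Expected by the test: fake.domain/registry.k8s.io/echoserver:1.4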

TestStartStop/group/no-preload/serial/SecondStart (6.35s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-873000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-873000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.291065875s)

-- stdout --
	* [no-preload-873000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-873000" primary control-plane node in "no-preload-873000" cluster
	* Restarting existing qemu2 VM for "no-preload-873000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-873000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:44:06.409819    6670 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:44:06.409951    6670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:06.409959    6670 out.go:358] Setting ErrFile to fd 2...
	I0816 10:44:06.409963    6670 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:06.410110    6670 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:44:06.411133    6670 out.go:352] Setting JSON to false
	I0816 10:44:06.427136    6670 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4409,"bootTime":1723825837,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:44:06.427201    6670 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:44:06.432158    6670 out.go:177] * [no-preload-873000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:44:06.435149    6670 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:44:06.435215    6670 notify.go:220] Checking for updates...
	I0816 10:44:06.443152    6670 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:44:06.446184    6670 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:44:06.450152    6670 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:44:06.453164    6670 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:44:06.456175    6670 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:44:06.459450    6670 config.go:182] Loaded profile config "no-preload-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:06.459704    6670 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:44:06.464106    6670 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 10:44:06.471110    6670 start.go:297] selected driver: qemu2
	I0816 10:44:06.471117    6670 start.go:901] validating driver "qemu2" against &{Name:no-preload-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:44:06.471172    6670 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:44:06.473561    6670 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:44:06.473603    6670 cni.go:84] Creating CNI manager for ""
	I0816 10:44:06.473613    6670 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:44:06.473642    6670 start.go:340] cluster config:
	{Name:no-preload-873000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-873000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:44:06.477324    6670 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:06.485940    6670 out.go:177] * Starting "no-preload-873000" primary control-plane node in "no-preload-873000" cluster
	I0816 10:44:06.490085    6670 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:44:06.490160    6670 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/no-preload-873000/config.json ...
	I0816 10:44:06.490170    6670 cache.go:107] acquiring lock: {Name:mk86e1e0f0dd0a6c1f029b1a5f8e88f860876b98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:06.490176    6670 cache.go:107] acquiring lock: {Name:mk75e94c99c6e780f8bf9e4e0d2bcdc82cbf5db6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:06.490223    6670 cache.go:107] acquiring lock: {Name:mkf10ea3a663ddc7a42eeabda7076b5e04cb28ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:06.490232    6670 cache.go:115] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 10:44:06.490237    6670 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 72.083µs
	I0816 10:44:06.490244    6670 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 10:44:06.490250    6670 cache.go:115] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0816 10:44:06.490252    6670 cache.go:107] acquiring lock: {Name:mka8074933ae002ecc0c58ac50edb11ffa51fd93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:06.490257    6670 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 90.666µs
	I0816 10:44:06.490262    6670 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0816 10:44:06.490278    6670 cache.go:115] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0816 10:44:06.490290    6670 cache.go:115] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0816 10:44:06.490288    6670 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 83.917µs
	I0816 10:44:06.490293    6670 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 42.833µs
	I0816 10:44:06.490296    6670 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0816 10:44:06.490297    6670 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0816 10:44:06.490296    6670 cache.go:107] acquiring lock: {Name:mkc06d7af0cf5c44c8f698c8b96186c3959e0ba2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:06.490306    6670 cache.go:107] acquiring lock: {Name:mk6eebea7fba021b1184ebbfb9b3007517a09612 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:06.490339    6670 cache.go:115] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0816 10:44:06.490344    6670 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 49.25µs
	I0816 10:44:06.490348    6670 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0816 10:44:06.490343    6670 cache.go:107] acquiring lock: {Name:mkf6568f934bcb844e749583e819d52808a69739 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:06.490352    6670 cache.go:115] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0816 10:44:06.490372    6670 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 66.25µs
	I0816 10:44:06.490375    6670 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0816 10:44:06.490396    6670 cache.go:115] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0816 10:44:06.490404    6670 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 234.416µs
	I0816 10:44:06.490408    6670 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0816 10:44:06.490404    6670 cache.go:107] acquiring lock: {Name:mka9f6f4ff6daab4a5bc6a340d881cd3e9dda8dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:06.490454    6670 cache.go:115] /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0816 10:44:06.490464    6670 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 73.5µs
	I0816 10:44:06.490469    6670 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0816 10:44:06.490473    6670 cache.go:87] Successfully saved all images to host disk.
	I0816 10:44:06.490574    6670 start.go:360] acquireMachinesLock for no-preload-873000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:06.490610    6670 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "no-preload-873000"
	I0816 10:44:06.490620    6670 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:44:06.490626    6670 fix.go:54] fixHost starting: 
	I0816 10:44:06.490748    6670 fix.go:112] recreateIfNeeded on no-preload-873000: state=Stopped err=<nil>
	W0816 10:44:06.490756    6670 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:44:06.498105    6670 out.go:177] * Restarting existing qemu2 VM for "no-preload-873000" ...
	I0816 10:44:06.502203    6670 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:06.502249    6670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:d9:a3:11:fc:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2
	I0816 10:44:06.504211    6670 main.go:141] libmachine: STDOUT: 
	I0816 10:44:06.504238    6670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:06.504266    6670 fix.go:56] duration metric: took 13.641833ms for fixHost
	I0816 10:44:06.504269    6670 start.go:83] releasing machines lock for "no-preload-873000", held for 13.655709ms
	W0816 10:44:06.504276    6670 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:44:06.504300    6670 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:06.504305    6670 start.go:729] Will try again in 5 seconds ...
	I0816 10:44:11.506384    6670 start.go:360] acquireMachinesLock for no-preload-873000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:12.598996    6670 start.go:364] duration metric: took 1.092492s to acquireMachinesLock for "no-preload-873000"
	I0816 10:44:12.599180    6670 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:44:12.599206    6670 fix.go:54] fixHost starting: 
	I0816 10:44:12.600407    6670 fix.go:112] recreateIfNeeded on no-preload-873000: state=Stopped err=<nil>
	W0816 10:44:12.600435    6670 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:44:12.609761    6670 out.go:177] * Restarting existing qemu2 VM for "no-preload-873000" ...
	I0816 10:44:12.619817    6670 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:12.620064    6670 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:d9:a3:11:fc:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/no-preload-873000/disk.qcow2
	I0816 10:44:12.630142    6670 main.go:141] libmachine: STDOUT: 
	I0816 10:44:12.630213    6670 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:12.630312    6670 fix.go:56] duration metric: took 31.110917ms for fixHost
	I0816 10:44:12.630336    6670 start.go:83] releasing machines lock for "no-preload-873000", held for 31.288459ms
	W0816 10:44:12.630555    6670 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-873000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-873000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:12.642881    6670 out.go:201] 
	W0816 10:44:12.650901    6670 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:44:12.650947    6670 out.go:270] * 
	* 
	W0816 10:44:12.653726    6670 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:44:12.665907    6670 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-873000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000: exit status 7 (52.421833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.35s)
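
The restart path fails identically to fresh creation, which isolates the fault to host networking rather than the saved profile: the VM boot is never reached because the fd-passing helper cannot connect. Assuming socket_vmnet_client behaves as its README describes (connect to the socket, then exec the given command with the connection on fd 3), the refusal can be reproduced without minikube:

	# A refused connect fails before the exec, so this is a pure connectivity probe.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true \
	  && echo "socket reachable" || echo "connection refused: daemon is down"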

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-573000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-573000 create -f testdata/busybox.yaml: exit status 1 (30.846583ms)

** stderr ** 
	error: context "embed-certs-573000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-573000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000: exit status 7 (29.720541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-573000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000: exit status 7 (33.124875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-573000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-873000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000: exit status 7 (34.0775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-873000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-873000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-873000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.936583ms)

** stderr ** 
	error: context "no-preload-873000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-873000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000: exit status 7 (31.494958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-573000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-573000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-573000 describe deploy/metrics-server -n kube-system: exit status 1 (28.482875ms)

** stderr ** 
	error: context "embed-certs-573000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-573000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000: exit status 7 (31.321083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-573000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-873000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000: exit status 7 (32.509167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
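
The "(-want +got)" layout above is the diff format produced by github.com/google/go-cmp; `image list` returned nothing because the VM never started, so every expected image shows as missing. A minimal Go sketch of producing such an image-list comparison (want/got values abbreviated from the log; not the harness's exact assertion):

// imagediff.go - minimal sketch of a "(-want +got)" image-list diff,
// using github.com/google/go-cmp (assumed dependency).
package main

import (
	"fmt"
	"sort"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.31.0",
		// ...remaining expected v1.31.0 images, as listed in the failure above
	}
	got := []string{} // `minikube image list` output: empty when the VM never started

	// Sort both sides so the diff reflects membership, not ordering.
	sort.Strings(want)
	sort.Strings(got)
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
	}
}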

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-873000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-873000 --alsologtostderr -v=1: exit status 83 (40.204125ms)

-- stdout --
	* The control-plane node no-preload-873000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-873000"

-- /stdout --
** stderr ** 
	I0816 10:44:12.934600    6707 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:44:12.934773    6707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:12.934776    6707 out.go:358] Setting ErrFile to fd 2...
	I0816 10:44:12.934779    6707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:12.934902    6707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:44:12.935124    6707 out.go:352] Setting JSON to false
	I0816 10:44:12.935134    6707 mustload.go:65] Loading cluster: no-preload-873000
	I0816 10:44:12.935333    6707 config.go:182] Loaded profile config "no-preload-873000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:12.939396    6707 out.go:177] * The control-plane node no-preload-873000 host is not running: state=Stopped
	I0816 10:44:12.942481    6707 out.go:177]   To start a cluster, run: "minikube start -p no-preload-873000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-873000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000: exit status 7 (29.282667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-873000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000: exit status 7 (28.852916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-873000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
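
The (dbg) wrappers above distinguish failures purely by exit code: 83 from `pause` against a stopped host, 7 from `status`, 80 from a failed `start` (codes taken from this report's own output, not from minikube documentation). A minimal Go sketch of extracting such a code from a subprocess (binary path and profile name copied from the log; the check itself is illustrative, not the harness's wrapper):

// exitcode.go - minimal sketch: run a minikube subcommand and report its
// exit code the way the (dbg) wrappers print "Non-zero exit".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "no-preload-873000")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// In this report: 83 accompanies "host is not running" from pause,
		// 7 comes from status on a stopped host, 80 from GUEST_PROVISION.
		fmt.Println("non-zero exit:", ee.ExitCode())
	}
}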

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-353000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-353000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.871704459s)

-- stdout --
	* [default-k8s-diff-port-353000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-353000" primary control-plane node in "default-k8s-diff-port-353000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-353000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:44:13.359283    6738 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:44:13.359427    6738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:13.359430    6738 out.go:358] Setting ErrFile to fd 2...
	I0816 10:44:13.359433    6738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:13.359562    6738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:44:13.360642    6738 out.go:352] Setting JSON to false
	I0816 10:44:13.376734    6738 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4416,"bootTime":1723825837,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:44:13.376811    6738 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:44:13.380483    6738 out.go:177] * [default-k8s-diff-port-353000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:44:13.388301    6738 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:44:13.388346    6738 notify.go:220] Checking for updates...
	I0816 10:44:13.395467    6738 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:44:13.398353    6738 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:44:13.401384    6738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:44:13.404388    6738 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:44:13.405807    6738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:44:13.409701    6738 config.go:182] Loaded profile config "embed-certs-573000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:13.409759    6738 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:13.409801    6738 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:44:13.414425    6738 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:44:13.420337    6738 start.go:297] selected driver: qemu2
	I0816 10:44:13.420344    6738 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:44:13.420351    6738 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:44:13.422573    6738 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 10:44:13.425409    6738 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:44:13.428576    6738 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:44:13.428595    6738 cni.go:84] Creating CNI manager for ""
	I0816 10:44:13.428605    6738 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:44:13.428613    6738 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:44:13.428637    6738 start.go:340] cluster config:
	{Name:default-k8s-diff-port-353000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:44:13.432405    6738 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:13.440321    6738 out.go:177] * Starting "default-k8s-diff-port-353000" primary control-plane node in "default-k8s-diff-port-353000" cluster
	I0816 10:44:13.444343    6738 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:44:13.444357    6738 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:44:13.444367    6738 cache.go:56] Caching tarball of preloaded images
	I0816 10:44:13.444421    6738 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:44:13.444426    6738 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:44:13.444502    6738 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/default-k8s-diff-port-353000/config.json ...
	I0816 10:44:13.444513    6738 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/default-k8s-diff-port-353000/config.json: {Name:mk6d3ee91f96a0eb766389272ad509d994273ea9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:44:13.444912    6738 start.go:360] acquireMachinesLock for default-k8s-diff-port-353000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:13.444953    6738 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "default-k8s-diff-port-353000"
	I0816 10:44:13.444967    6738 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:44:13.444994    6738 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:44:13.449413    6738 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:44:13.466761    6738 start.go:159] libmachine.API.Create for "default-k8s-diff-port-353000" (driver="qemu2")
	I0816 10:44:13.466791    6738 client.go:168] LocalClient.Create starting
	I0816 10:44:13.466851    6738 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:44:13.466886    6738 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:13.466894    6738 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:13.466932    6738 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:44:13.466955    6738 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:13.466962    6738 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:13.467502    6738 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:44:13.620849    6738 main.go:141] libmachine: Creating SSH key...
	I0816 10:44:13.688617    6738 main.go:141] libmachine: Creating Disk image...
	I0816 10:44:13.688622    6738 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:44:13.688795    6738 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0816 10:44:13.698018    6738 main.go:141] libmachine: STDOUT: 
	I0816 10:44:13.698035    6738 main.go:141] libmachine: STDERR: 
	I0816 10:44:13.698082    6738 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2 +20000M
	I0816 10:44:13.705937    6738 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:44:13.705951    6738 main.go:141] libmachine: STDERR: 
	I0816 10:44:13.705967    6738 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0816 10:44:13.705976    6738 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:44:13.705987    6738 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:13.706009    6738 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:db:6c:9a:e9:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0816 10:44:13.707579    6738 main.go:141] libmachine: STDOUT: 
	I0816 10:44:13.707594    6738 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:13.707613    6738 client.go:171] duration metric: took 240.821292ms to LocalClient.Create
	I0816 10:44:15.709788    6738 start.go:128] duration metric: took 2.264824917s to createHost
	I0816 10:44:15.709867    6738 start.go:83] releasing machines lock for "default-k8s-diff-port-353000", held for 2.264957083s
	W0816 10:44:15.709968    6738 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:15.728345    6738 out.go:177] * Deleting "default-k8s-diff-port-353000" in qemu2 ...
	W0816 10:44:15.757378    6738 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:15.757412    6738 start.go:729] Will try again in 5 seconds ...
	I0816 10:44:20.759569    6738 start.go:360] acquireMachinesLock for default-k8s-diff-port-353000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:20.759975    6738 start.go:364] duration metric: took 302.542µs to acquireMachinesLock for "default-k8s-diff-port-353000"
	I0816 10:44:20.760118    6738 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:44:20.760432    6738 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:44:20.769215    6738 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:44:20.821430    6738 start.go:159] libmachine.API.Create for "default-k8s-diff-port-353000" (driver="qemu2")
	I0816 10:44:20.821474    6738 client.go:168] LocalClient.Create starting
	I0816 10:44:20.821598    6738 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:44:20.821676    6738 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:20.821697    6738 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:20.821752    6738 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:44:20.821797    6738 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:20.821808    6738 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:20.822421    6738 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:44:20.988706    6738 main.go:141] libmachine: Creating SSH key...
	I0816 10:44:21.112382    6738 main.go:141] libmachine: Creating Disk image...
	I0816 10:44:21.112390    6738 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:44:21.112578    6738 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0816 10:44:21.122094    6738 main.go:141] libmachine: STDOUT: 
	I0816 10:44:21.122112    6738 main.go:141] libmachine: STDERR: 
	I0816 10:44:21.122175    6738 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2 +20000M
	I0816 10:44:21.130244    6738 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:44:21.130257    6738 main.go:141] libmachine: STDERR: 
	I0816 10:44:21.130270    6738 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0816 10:44:21.130275    6738 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:44:21.130285    6738 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:21.130310    6738 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:3a:64:4d:2b:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0816 10:44:21.131935    6738 main.go:141] libmachine: STDOUT: 
	I0816 10:44:21.131950    6738 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:21.131963    6738 client.go:171] duration metric: took 310.490208ms to LocalClient.Create
	I0816 10:44:23.134091    6738 start.go:128] duration metric: took 2.37368575s to createHost
	I0816 10:44:23.134190    6738 start.go:83] releasing machines lock for "default-k8s-diff-port-353000", held for 2.374246166s
	W0816 10:44:23.134496    6738 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-353000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-353000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:23.151086    6738 out.go:201] 
	W0816 10:44:23.158095    6738 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:44:23.158147    6738 out.go:270] * 
	* 
	W0816 10:44:23.160698    6738 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:44:23.174042    6738 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-353000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (71.216458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.95s)
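
The root cause for this group is visible in the stderr: every qemu2 launch goes through /opt/socket_vmnet/bin/socket_vmnet_client and dies with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, both on first create and on the automatic retry. A minimal Go sketch that probes the socket directly, useful for separating a dead socket_vmnet daemon from minikube-side faults (the path is taken from the log; the probe itself is an assumption, not part of the harness):

// socketprobe.go - minimal sketch: check whether anything is listening on
// the unix socket that every qemu2 start in this report fails to reach.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" here matches the failures above: socket_vmnet
		// is not running, or is not listening on this path.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}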

TestStartStop/group/embed-certs/serial/SecondStart (6.82s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-573000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-573000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.757879417s)

-- stdout --
	* [embed-certs-573000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-573000" primary control-plane node in "embed-certs-573000" cluster
	* Restarting existing qemu2 VM for "embed-certs-573000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-573000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0816 10:44:16.483063    6765 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:44:16.483216    6765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:16.483220    6765 out.go:358] Setting ErrFile to fd 2...
	I0816 10:44:16.483222    6765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:16.483361    6765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:44:16.484362    6765 out.go:352] Setting JSON to false
	I0816 10:44:16.500378    6765 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4419,"bootTime":1723825837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:44:16.500453    6765 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:44:16.503961    6765 out.go:177] * [embed-certs-573000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:44:16.510899    6765 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:44:16.510930    6765 notify.go:220] Checking for updates...
	I0816 10:44:16.517872    6765 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:44:16.520932    6765 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:44:16.523918    6765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:44:16.526818    6765 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:44:16.529883    6765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:44:16.533233    6765 config.go:182] Loaded profile config "embed-certs-573000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:16.533495    6765 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:44:16.536901    6765 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 10:44:16.543885    6765 start.go:297] selected driver: qemu2
	I0816 10:44:16.543894    6765 start.go:901] validating driver "qemu2" against &{Name:embed-certs-573000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0 ClusterName:embed-certs-573000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:44:16.543974    6765 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:44:16.546355    6765 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:44:16.546381    6765 cni.go:84] Creating CNI manager for ""
	I0816 10:44:16.546389    6765 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:44:16.546418    6765 start.go:340] cluster config:
	{Name:embed-certs-573000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-573000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:44:16.549881    6765 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:16.557915    6765 out.go:177] * Starting "embed-certs-573000" primary control-plane node in "embed-certs-573000" cluster
	I0816 10:44:16.561983    6765 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:44:16.562004    6765 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:44:16.562015    6765 cache.go:56] Caching tarball of preloaded images
	I0816 10:44:16.562086    6765 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:44:16.562092    6765 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:44:16.562151    6765 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/embed-certs-573000/config.json ...
	I0816 10:44:16.562655    6765 start.go:360] acquireMachinesLock for embed-certs-573000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:16.562692    6765 start.go:364] duration metric: took 31.416µs to acquireMachinesLock for "embed-certs-573000"
	I0816 10:44:16.562702    6765 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:44:16.562709    6765 fix.go:54] fixHost starting: 
	I0816 10:44:16.562837    6765 fix.go:112] recreateIfNeeded on embed-certs-573000: state=Stopped err=<nil>
	W0816 10:44:16.562845    6765 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:44:16.566895    6765 out.go:177] * Restarting existing qemu2 VM for "embed-certs-573000" ...
	I0816 10:44:16.570913    6765 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:16.570963    6765 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:4f:05:44:08:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2
	I0816 10:44:16.573065    6765 main.go:141] libmachine: STDOUT: 
	I0816 10:44:16.573084    6765 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:16.573114    6765 fix.go:56] duration metric: took 10.406333ms for fixHost
	I0816 10:44:16.573119    6765 start.go:83] releasing machines lock for "embed-certs-573000", held for 10.423125ms
	W0816 10:44:16.573125    6765 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:44:16.573160    6765 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:16.573164    6765 start.go:729] Will try again in 5 seconds ...
	I0816 10:44:21.575277    6765 start.go:360] acquireMachinesLock for embed-certs-573000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:23.134406    6765 start.go:364] duration metric: took 1.559041416s to acquireMachinesLock for "embed-certs-573000"
	I0816 10:44:23.134597    6765 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:44:23.134621    6765 fix.go:54] fixHost starting: 
	I0816 10:44:23.135347    6765 fix.go:112] recreateIfNeeded on embed-certs-573000: state=Stopped err=<nil>
	W0816 10:44:23.135374    6765 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:44:23.155017    6765 out.go:177] * Restarting existing qemu2 VM for "embed-certs-573000" ...
	I0816 10:44:23.161935    6765 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:23.162151    6765 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:4f:05:44:08:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/embed-certs-573000/disk.qcow2
	I0816 10:44:23.171539    6765 main.go:141] libmachine: STDOUT: 
	I0816 10:44:23.171625    6765 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:23.171723    6765 fix.go:56] duration metric: took 37.102458ms for fixHost
	I0816 10:44:23.171748    6765 start.go:83] releasing machines lock for "embed-certs-573000", held for 37.276791ms
	W0816 10:44:23.171972    6765 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-573000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-573000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:23.186094    6765 out.go:201] 
	W0816 10:44:23.190182    6765 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:44:23.190223    6765 out.go:270] * 
	* 
	W0816 10:44:23.193195    6765 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:44:23.203024    6765 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-573000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000: exit status 7 (63.213042ms)

-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-573000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.82s)
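Every failure in this group traces back to the same root cause: nothing was listening on /var/run/socket_vmnet, so every qemu2 VM launch through socket_vmnet_client was refused. A minimal Go sketch of that reachability check, assuming only the standard library (the socket path comes from the log above; this is illustrative, not minikube's own code):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Dial the unix socket that socket_vmnet_client needs; while the
        // socket_vmnet daemon is down this fails with "connection refused",
        // the same condition behind every start failure in this run.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is reachable")
    }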

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-353000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-353000 create -f testdata/busybox.yaml: exit status 1 (31.376917ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-353000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-353000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (31.827708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (33.367084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
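The kubectl failure here is a knock-on effect: because the VM never started, the profile's context was never written to the kubeconfig, so every `kubectl --context` invocation fails immediately. A minimal Go sketch of the same context lookup, assuming k8s.io/client-go (the context name is taken from the log; the code is illustrative, not the harness's):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig with the same default rules kubectl uses
        // (KUBECONFIG if set, otherwise ~/.kube/config).
        cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
        if err != nil {
            panic(err)
        }
        name := "default-k8s-diff-port-353000" // profile name from the log
        if _, ok := cfg.Contexts[name]; !ok {
            // Matches the kubectl error above: the context was never
            // created because the VM never came up.
            fmt.Printf("context %q does not exist\n", name)
        }
    }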

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-573000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000: exit status 7 (34.913084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-573000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-573000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-573000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-573000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.926958ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-573000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-573000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000: exit status 7 (30.043125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-573000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-353000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-353000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-353000 describe deploy/metrics-server -n kube-system: exit status 1 (28.148667ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-353000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-353000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (30.660875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
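The assertion expects the deployment description to contain fake.domain/registry.k8s.io/echoserver:1.4, i.e. the --registries override prepended to the --images override; here it fails trivially because `kubectl describe` produced no output at all. A minimal Go sketch of that composition and contains-check, under the assumption that this mirrors the flag semantics (expectedImage is a hypothetical helper, not the harness's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // expectedImage mirrors how the override flags compose:
    // --registries=MetricsServer=fake.domain prepends a registry to
    // --images=MetricsServer=registry.k8s.io/echoserver:1.4.
    func expectedImage(registry, image string) string {
        if registry == "" {
            return image
        }
        return registry + "/" + image
    }

    func main() {
        want := expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4")
        deployInfo := "" // empty above: `kubectl describe` exited non-zero
        fmt.Println(strings.Contains(deployInfo, want)) // false, so the test fails
    }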

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-573000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000: exit status 7 (31.594167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-573000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)
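The (-want +got) listing above is the output format of github.com/google/go-cmp: since the VM never started, `image list` returned nothing and every expected image shows as missing with a leading "-". A minimal sketch of how such a diff is produced, assuming go-cmp (illustrative, not the harness's exact code):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/kube-apiserver:v1.31.0",
            // ...and the rest of the expected v1.31.0 images listed above
        }
        var got []string // empty: the VM never started, so nothing is listed
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
        }
    }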

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-573000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-573000 --alsologtostderr -v=1: exit status 83 (49.935209ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-573000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-573000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:44:23.496688    6798 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:44:23.496840    6798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:23.496843    6798 out.go:358] Setting ErrFile to fd 2...
	I0816 10:44:23.496846    6798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:23.496978    6798 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:44:23.497229    6798 out.go:352] Setting JSON to false
	I0816 10:44:23.497237    6798 mustload.go:65] Loading cluster: embed-certs-573000
	I0816 10:44:23.497439    6798 config.go:182] Loaded profile config "embed-certs-573000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:23.501890    6798 out.go:177] * The control-plane node embed-certs-573000 host is not running: state=Stopped
	I0816 10:44:23.509922    6798 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-573000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-573000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000: exit status 7 (30.086167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-573000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000: exit status 7 (27.858208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-573000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
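Note the two distinct exit codes above: `pause` exits 83 on the "host is not running" advisory path, while the post-mortem status probes exit 7, which the harness records as "may be ok". A minimal Go sketch of how a harness can capture an exit code and stdout the way the "(dbg) Non-zero exit" lines do, assuming the binary path from the log exists relative to the working directory (illustrative only):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // The same status probe the post-mortem runs.
        start := time.Now()
        cmd := exec.Command("out/minikube-darwin-arm64",
            "status", "--format={{.Host}}", "-p", "embed-certs-573000")
        out, err := cmd.Output() // stdout is still returned on a non-zero exit
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // e.g. "exit status 7", which the harness notes "may be ok"
            fmt.Printf("Non-zero exit: exit status %d (%s)\n", ee.ExitCode(), time.Since(start))
        }
        fmt.Printf("-- stdout --\n%s", out)
    }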

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-972000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-972000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.8414135s)

                                                
                                                
-- stdout --
	* [newest-cni-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-972000" primary control-plane node in "newest-cni-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:44:23.808729    6821 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:44:23.808841    6821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:23.808844    6821 out.go:358] Setting ErrFile to fd 2...
	I0816 10:44:23.808846    6821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:23.808972    6821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:44:23.810018    6821 out.go:352] Setting JSON to false
	I0816 10:44:23.826132    6821 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4426,"bootTime":1723825837,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:44:23.826198    6821 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:44:23.830942    6821 out.go:177] * [newest-cni-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:44:23.837811    6821 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:44:23.837839    6821 notify.go:220] Checking for updates...
	I0816 10:44:23.843951    6821 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:44:23.845494    6821 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:44:23.848883    6821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:44:23.851948    6821 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:44:23.854938    6821 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:44:23.858288    6821 config.go:182] Loaded profile config "default-k8s-diff-port-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:23.858354    6821 config.go:182] Loaded profile config "multinode-420000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:23.858399    6821 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:44:23.862925    6821 out.go:177] * Using the qemu2 driver based on user configuration
	I0816 10:44:23.869861    6821 start.go:297] selected driver: qemu2
	I0816 10:44:23.869867    6821 start.go:901] validating driver "qemu2" against <nil>
	I0816 10:44:23.869873    6821 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:44:23.872225    6821 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0816 10:44:23.872252    6821 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0816 10:44:23.880893    6821 out.go:177] * Automatically selected the socket_vmnet network
	I0816 10:44:23.883968    6821 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 10:44:23.883998    6821 cni.go:84] Creating CNI manager for ""
	I0816 10:44:23.884007    6821 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:44:23.884011    6821 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 10:44:23.884035    6821 start.go:340] cluster config:
	{Name:newest-cni-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:44:23.887860    6821 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:23.893836    6821 out.go:177] * Starting "newest-cni-972000" primary control-plane node in "newest-cni-972000" cluster
	I0816 10:44:23.897889    6821 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:44:23.897908    6821 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:44:23.897922    6821 cache.go:56] Caching tarball of preloaded images
	I0816 10:44:23.897992    6821 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:44:23.897998    6821 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:44:23.898072    6821 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/newest-cni-972000/config.json ...
	I0816 10:44:23.898085    6821 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/newest-cni-972000/config.json: {Name:mkd1ed601aa685734f82f2b1598b32676c743f57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 10:44:23.898512    6821 start.go:360] acquireMachinesLock for newest-cni-972000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:23.898547    6821 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "newest-cni-972000"
	I0816 10:44:23.898562    6821 start.go:93] Provisioning new machine with config: &{Name:newest-cni-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:44:23.898597    6821 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:44:23.902850    6821 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:44:23.921002    6821 start.go:159] libmachine.API.Create for "newest-cni-972000" (driver="qemu2")
	I0816 10:44:23.921030    6821 client.go:168] LocalClient.Create starting
	I0816 10:44:23.921087    6821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:44:23.921122    6821 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:23.921133    6821 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:23.921171    6821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:44:23.921195    6821 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:23.921203    6821 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:23.921747    6821 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:44:24.069361    6821 main.go:141] libmachine: Creating SSH key...
	I0816 10:44:24.118770    6821 main.go:141] libmachine: Creating Disk image...
	I0816 10:44:24.118775    6821 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:44:24.118964    6821 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2
	I0816 10:44:24.128236    6821 main.go:141] libmachine: STDOUT: 
	I0816 10:44:24.128259    6821 main.go:141] libmachine: STDERR: 
	I0816 10:44:24.128310    6821 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2 +20000M
	I0816 10:44:24.136184    6821 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:44:24.136199    6821 main.go:141] libmachine: STDERR: 
	I0816 10:44:24.136218    6821 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2
	I0816 10:44:24.136223    6821 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:44:24.136234    6821 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:24.136258    6821 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:66:62:bd:5c:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2
	I0816 10:44:24.137856    6821 main.go:141] libmachine: STDOUT: 
	I0816 10:44:24.137870    6821 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:24.137889    6821 client.go:171] duration metric: took 216.861375ms to LocalClient.Create
	I0816 10:44:26.140029    6821 start.go:128] duration metric: took 2.241464333s to createHost
	I0816 10:44:26.140078    6821 start.go:83] releasing machines lock for "newest-cni-972000", held for 2.241572s
	W0816 10:44:26.140150    6821 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:26.152384    6821 out.go:177] * Deleting "newest-cni-972000" in qemu2 ...
	W0816 10:44:26.182485    6821 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:26.182513    6821 start.go:729] Will try again in 5 seconds ...
	I0816 10:44:31.182921    6821 start.go:360] acquireMachinesLock for newest-cni-972000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:31.183447    6821 start.go:364] duration metric: took 399.084µs to acquireMachinesLock for "newest-cni-972000"
	I0816 10:44:31.183595    6821 start.go:93] Provisioning new machine with config: &{Name:newest-cni-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0816 10:44:31.183947    6821 start.go:125] createHost starting for "" (driver="qemu2")
	I0816 10:44:31.192576    6821 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 10:44:31.244032    6821 start.go:159] libmachine.API.Create for "newest-cni-972000" (driver="qemu2")
	I0816 10:44:31.244082    6821 client.go:168] LocalClient.Create starting
	I0816 10:44:31.244201    6821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/ca.pem
	I0816 10:44:31.244274    6821 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:31.244289    6821 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:31.244350    6821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19461-1189/.minikube/certs/cert.pem
	I0816 10:44:31.244395    6821 main.go:141] libmachine: Decoding PEM data...
	I0816 10:44:31.244410    6821 main.go:141] libmachine: Parsing certificate...
	I0816 10:44:31.244930    6821 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso...
	I0816 10:44:31.404041    6821 main.go:141] libmachine: Creating SSH key...
	I0816 10:44:31.543670    6821 main.go:141] libmachine: Creating Disk image...
	I0816 10:44:31.543676    6821 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0816 10:44:31.543870    6821 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2
	I0816 10:44:31.553366    6821 main.go:141] libmachine: STDOUT: 
	I0816 10:44:31.553384    6821 main.go:141] libmachine: STDERR: 
	I0816 10:44:31.553429    6821 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2 +20000M
	I0816 10:44:31.561254    6821 main.go:141] libmachine: STDOUT: Image resized.
	
	I0816 10:44:31.561268    6821 main.go:141] libmachine: STDERR: 
	I0816 10:44:31.561283    6821 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2
	I0816 10:44:31.561287    6821 main.go:141] libmachine: Starting QEMU VM...
	I0816 10:44:31.561297    6821 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:31.561329    6821 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:30:d6:a1:21:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2
	I0816 10:44:31.562891    6821 main.go:141] libmachine: STDOUT: 
	I0816 10:44:31.562909    6821 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:31.562921    6821 client.go:171] duration metric: took 318.83925ms to LocalClient.Create
	I0816 10:44:33.565046    6821 start.go:128] duration metric: took 2.381123916s to createHost
	I0816 10:44:33.565105    6821 start.go:83] releasing machines lock for "newest-cni-972000", held for 2.3816745s
	W0816 10:44:33.565512    6821 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:33.582975    6821 out.go:201] 
	W0816 10:44:33.589123    6821 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:44:33.589170    6821 out.go:270] * 
	W0816 10:44:33.591591    6821 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:44:33.604088    6821 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-972000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-972000 -n newest-cni-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-972000 -n newest-cni-972000: exit status 7 (67.114ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-972000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.91s)
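Unlike the SecondStart failures, which go through the restart (fixHost) path, FirstStart provisions a new machine: create the VM, hit the socket error, delete the profile, retry once after 5 seconds, then exit with GUEST_PROVISION (status 80). A minimal Go sketch of that retry-once control flow as it reads in the log above (illustrative, not minikube's implementation):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // errSocket stands in for the qemu launch failing while the
    // socket_vmnet daemon is unreachable.
    var errSocket = errors.New(`connect to "/var/run/socket_vmnet": connection refused`)

    func createHost() error { return errSocket }

    func main() {
        if err := createHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // the log's "Will try again in 5 seconds ..."
            if err := createHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err) // exit status 80
            }
        }
    }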

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-353000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-353000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.233811s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-353000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-353000" primary control-plane node in "default-k8s-diff-port-353000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-353000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-353000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:44:27.440670    6851 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:44:27.440800    6851 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:27.440804    6851 out.go:358] Setting ErrFile to fd 2...
	I0816 10:44:27.440806    6851 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:27.440939    6851 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:44:27.441968    6851 out.go:352] Setting JSON to false
	I0816 10:44:27.458247    6851 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4430,"bootTime":1723825837,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:44:27.458312    6851 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:44:27.463838    6851 out.go:177] * [default-k8s-diff-port-353000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:44:27.470842    6851 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:44:27.470892    6851 notify.go:220] Checking for updates...
	I0816 10:44:27.477799    6851 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:44:27.480738    6851 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:44:27.483785    6851 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:44:27.486821    6851 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:44:27.489774    6851 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:44:27.493077    6851 config.go:182] Loaded profile config "default-k8s-diff-port-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:27.493345    6851 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:44:27.497756    6851 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 10:44:27.504878    6851 start.go:297] selected driver: qemu2
	I0816 10:44:27.504887    6851 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:44:27.504960    6851 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:44:27.507277    6851 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 10:44:27.507305    6851 cni.go:84] Creating CNI manager for ""
	I0816 10:44:27.507313    6851 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:44:27.507344    6851 start.go:340] cluster config:
	{Name:default-k8s-diff-port-353000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-353000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:44:27.510959    6851 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:27.518768    6851 out.go:177] * Starting "default-k8s-diff-port-353000" primary control-plane node in "default-k8s-diff-port-353000" cluster
	I0816 10:44:27.522852    6851 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:44:27.522869    6851 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:44:27.522881    6851 cache.go:56] Caching tarball of preloaded images
	I0816 10:44:27.522947    6851 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:44:27.522953    6851 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:44:27.523015    6851 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/default-k8s-diff-port-353000/config.json ...
	I0816 10:44:27.523569    6851 start.go:360] acquireMachinesLock for default-k8s-diff-port-353000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:27.523604    6851 start.go:364] duration metric: took 28.667µs to acquireMachinesLock for "default-k8s-diff-port-353000"
	I0816 10:44:27.523614    6851 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:44:27.523622    6851 fix.go:54] fixHost starting: 
	I0816 10:44:27.523745    6851 fix.go:112] recreateIfNeeded on default-k8s-diff-port-353000: state=Stopped err=<nil>
	W0816 10:44:27.523753    6851 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:44:27.528738    6851 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-353000" ...
	I0816 10:44:27.536671    6851 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:27.536712    6851 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:3a:64:4d:2b:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0816 10:44:27.538805    6851 main.go:141] libmachine: STDOUT: 
	I0816 10:44:27.538828    6851 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:27.538859    6851 fix.go:56] duration metric: took 15.238917ms for fixHost
	I0816 10:44:27.538865    6851 start.go:83] releasing machines lock for "default-k8s-diff-port-353000", held for 15.256333ms
	W0816 10:44:27.538870    6851 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:44:27.538914    6851 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:27.538920    6851 start.go:729] Will try again in 5 seconds ...
	I0816 10:44:32.541027    6851 start.go:360] acquireMachinesLock for default-k8s-diff-port-353000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:33.565296    6851 start.go:364] duration metric: took 1.024183709s to acquireMachinesLock for "default-k8s-diff-port-353000"
	I0816 10:44:33.565436    6851 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:44:33.565456    6851 fix.go:54] fixHost starting: 
	I0816 10:44:33.566217    6851 fix.go:112] recreateIfNeeded on default-k8s-diff-port-353000: state=Stopped err=<nil>
	W0816 10:44:33.566244    6851 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:44:33.586072    6851 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-353000" ...
	I0816 10:44:33.592020    6851 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:33.592173    6851 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:3a:64:4d:2b:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/default-k8s-diff-port-353000/disk.qcow2
	I0816 10:44:33.601270    6851 main.go:141] libmachine: STDOUT: 
	I0816 10:44:33.601336    6851 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:33.601424    6851 fix.go:56] duration metric: took 35.970292ms for fixHost
	I0816 10:44:33.601442    6851 start.go:83] releasing machines lock for "default-k8s-diff-port-353000", held for 36.109416ms
	W0816 10:44:33.601628    6851 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-353000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-353000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:33.611948    6851 out.go:201] 
	W0816 10:44:33.622327    6851 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:44:33.622363    6851 out.go:270] * 
	* 
	W0816 10:44:33.624923    6851 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:44:33.641148    6851 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-353000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (51.674875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.29s)
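Every failure in this group traces back to the same line in the driver output: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives the network file descriptor it is launched with (-netdev socket,id=net0,fd=3). A minimal Go sketch, independent of minikube, for checking that socket from the build host (the path matches the SocketVMnetPath in the profile config logged below; adjust it if socket_vmnet was installed elsewhere):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the profile config; assumed default install location.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" means the socket file exists but no daemon is
		// listening; "no such file or directory" means it was never created.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this dial fails with ECONNREFUSED, the socket_vmnet daemon on the Jenkins host is down, which would explain every GUEST_PROVISION failure in this report rather than anything specific to one profile.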

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-353000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (34.571167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-353000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-353000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-353000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.415166ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-353000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-353000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (35.210667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
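The kubectl errors here are downstream of the failed SecondStart: because the VM never came back up, the "default-k8s-diff-port-353000" context was never restored to the kubeconfig, so kubectl aborts before contacting any server. A hedged client-go sketch for listing which contexts a kubeconfig actually contains (the KUBECONFIG handling is illustrative):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Use $KUBECONFIG when set, as the test harness does; otherwise fall
	// back to the conventional ~/.kube/config.
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		path = clientcmd.RecommendedHomeFile
	}

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("context:", name)
	}
}

A profile name missing from this list produces exactly the `context "..." does not exist` error seen above.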

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-353000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (28.934375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
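The `-want +got` block above is a go-cmp style diff: every expected image sits on the minus side and nothing appears on the plus side, i.e. `image list` returned an empty set because the VM never started. A sketch of how such a diff is produced, assuming a comparison along the lines of github.com/google/go-cmp (the actual helper in start_stop_delete_test.go may differ in detail):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0",
		"registry.k8s.io/pause:3.10",
	}
	got := []string{} // what a stopped VM reports

	// cmp.Diff returns "" when the slices match; otherwise a
	// -want +got diff like the one in the log above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}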

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-353000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-353000 --alsologtostderr -v=1: exit status 83 (40.766208ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-353000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-353000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:44:33.891651    6884 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:44:33.891809    6884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:33.891812    6884 out.go:358] Setting ErrFile to fd 2...
	I0816 10:44:33.891815    6884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:33.891946    6884 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:44:33.892162    6884 out.go:352] Setting JSON to false
	I0816 10:44:33.892174    6884 mustload.go:65] Loading cluster: default-k8s-diff-port-353000
	I0816 10:44:33.892357    6884 config.go:182] Loaded profile config "default-k8s-diff-port-353000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:33.897082    6884 out.go:177] * The control-plane node default-k8s-diff-port-353000 host is not running: state=Stopped
	I0816 10:44:33.900980    6884 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-353000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-353000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (28.428416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (29.570584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-353000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
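The post-mortem helper repeatedly runs `status --format={{.Host}}`; the format string is a standard Go text/template applied to minikube's status value, and the bare `Stopped` in the stdout blocks is simply the Host field rendered alone. A minimal sketch of that mechanism (the struct here is illustrative, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Status stands in for minikube's status struct; only the field used by
// the --format flag in these tests is sketched.
type Status struct {
	Host string
}

func main() {
	st := Status{Host: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	_ = tmpl.Execute(os.Stdout, st) // prints: Stopped
	os.Stdout.WriteString("\n")
}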

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-972000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-972000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.187281s)

                                                
                                                
-- stdout --
	* [newest-cni-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-972000" primary control-plane node in "newest-cni-972000" cluster
	* Restarting existing qemu2 VM for "newest-cni-972000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-972000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:44:37.892247    6921 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:44:37.892376    6921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:37.892379    6921 out.go:358] Setting ErrFile to fd 2...
	I0816 10:44:37.892381    6921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:37.892514    6921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:44:37.896821    6921 out.go:352] Setting JSON to false
	I0816 10:44:37.913134    6921 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4440,"bootTime":1723825837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 10:44:37.913225    6921 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 10:44:37.918444    6921 out.go:177] * [newest-cni-972000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 10:44:37.926432    6921 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 10:44:37.926493    6921 notify.go:220] Checking for updates...
	I0816 10:44:37.931028    6921 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 10:44:37.934381    6921 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 10:44:37.937390    6921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 10:44:37.940440    6921 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 10:44:37.943350    6921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 10:44:37.946653    6921 config.go:182] Loaded profile config "newest-cni-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:37.946896    6921 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 10:44:37.951371    6921 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 10:44:37.958298    6921 start.go:297] selected driver: qemu2
	I0816 10:44:37.958304    6921 start.go:901] validating driver "qemu2" against &{Name:newest-cni-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:44:37.958355    6921 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 10:44:37.960717    6921 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 10:44:37.960762    6921 cni.go:84] Creating CNI manager for ""
	I0816 10:44:37.960778    6921 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 10:44:37.960809    6921 start.go:340] cluster config:
	{Name:newest-cni-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 10:44:37.964321    6921 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 10:44:37.972372    6921 out.go:177] * Starting "newest-cni-972000" primary control-plane node in "newest-cni-972000" cluster
	I0816 10:44:37.976362    6921 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 10:44:37.976380    6921 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 10:44:37.976392    6921 cache.go:56] Caching tarball of preloaded images
	I0816 10:44:37.976447    6921 preload.go:172] Found /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 10:44:37.976453    6921 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0816 10:44:37.976534    6921 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/newest-cni-972000/config.json ...
	I0816 10:44:37.977053    6921 start.go:360] acquireMachinesLock for newest-cni-972000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:37.977085    6921 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "newest-cni-972000"
	I0816 10:44:37.977095    6921 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:44:37.977100    6921 fix.go:54] fixHost starting: 
	I0816 10:44:37.977220    6921 fix.go:112] recreateIfNeeded on newest-cni-972000: state=Stopped err=<nil>
	W0816 10:44:37.977227    6921 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:44:37.981391    6921 out.go:177] * Restarting existing qemu2 VM for "newest-cni-972000" ...
	I0816 10:44:37.989304    6921 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:37.989346    6921 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:30:d6:a1:21:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2
	I0816 10:44:37.991541    6921 main.go:141] libmachine: STDOUT: 
	I0816 10:44:37.991562    6921 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:37.991594    6921 fix.go:56] duration metric: took 14.495041ms for fixHost
	I0816 10:44:37.991598    6921 start.go:83] releasing machines lock for "newest-cni-972000", held for 14.508958ms
	W0816 10:44:37.991604    6921 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:44:37.991639    6921 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:37.991644    6921 start.go:729] Will try again in 5 seconds ...
	I0816 10:44:42.993650    6921 start.go:360] acquireMachinesLock for newest-cni-972000: {Name:mka48981493194f3d87e2fd5087b3f9ced6a3d8b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 10:44:42.994085    6921 start.go:364] duration metric: took 333.625µs to acquireMachinesLock for "newest-cni-972000"
	I0816 10:44:42.994199    6921 start.go:96] Skipping create...Using existing machine configuration
	I0816 10:44:42.994219    6921 fix.go:54] fixHost starting: 
	I0816 10:44:42.994909    6921 fix.go:112] recreateIfNeeded on newest-cni-972000: state=Stopped err=<nil>
	W0816 10:44:42.994936    6921 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 10:44:43.004294    6921 out.go:177] * Restarting existing qemu2 VM for "newest-cni-972000" ...
	I0816 10:44:43.007174    6921 qemu.go:418] Using hvf for hardware acceleration
	I0816 10:44:43.007411    6921 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:30:d6:a1:21:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19461-1189/.minikube/machines/newest-cni-972000/disk.qcow2
	I0816 10:44:43.016445    6921 main.go:141] libmachine: STDOUT: 
	I0816 10:44:43.016506    6921 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0816 10:44:43.016582    6921 fix.go:56] duration metric: took 22.359959ms for fixHost
	I0816 10:44:43.016599    6921 start.go:83] releasing machines lock for "newest-cni-972000", held for 22.490083ms
	W0816 10:44:43.016766    6921 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-972000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-972000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0816 10:44:43.025238    6921 out.go:201] 
	W0816 10:44:43.029247    6921 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0816 10:44:43.029269    6921 out.go:270] * 
	* 
	W0816 10:44:43.031931    6921 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 10:44:43.039208    6921 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-972000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-972000 -n newest-cni-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-972000 -n newest-cni-972000: exit status 7 (68.980041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-972000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
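As with the earlier profiles, start.go gives the host exactly one retry: the first driver failure is logged as a warning (start.go:714), the process sleeps five seconds (start.go:729), and a second identical failure becomes the fatal GUEST_PROVISION exit. A simplified sketch of that control flow, under the assumption that both attempts go through the same fixHost path:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for minikube's fixHost/driver-start path, which in
// this run always fails at the socket_vmnet dial.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			return
		}
	}
	fmt.Println("host started")
}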

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-972000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-972000 -n newest-cni-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-972000 -n newest-cni-972000: exit status 7 (29.865041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-972000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-972000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-972000 --alsologtostderr -v=1: exit status 83 (40.475792ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-972000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-972000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0816 10:44:43.223299    6935 out.go:345] Setting OutFile to fd 1 ...
	I0816 10:44:43.223463    6935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:43.223466    6935 out.go:358] Setting ErrFile to fd 2...
	I0816 10:44:43.223467    6935 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 10:44:43.223593    6935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 10:44:43.223814    6935 out.go:352] Setting JSON to false
	I0816 10:44:43.223822    6935 mustload.go:65] Loading cluster: newest-cni-972000
	I0816 10:44:43.224027    6935 config.go:182] Loaded profile config "newest-cni-972000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 10:44:43.227909    6935 out.go:177] * The control-plane node newest-cni-972000 host is not running: state=Stopped
	I0816 10:44:43.231941    6935 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-972000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-972000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-972000 -n newest-cni-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-972000 -n newest-cni-972000: exit status 7 (30.68175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-972000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-972000 -n newest-cni-972000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-972000 -n newest-cni-972000: exit status 7 (29.136ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-972000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (156/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 7.06
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.11
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.29
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 197.85
29 TestAddons/serial/Volcano 37.38
31 TestAddons/serial/GCPAuth/Namespaces 0.09
33 TestAddons/parallel/Registry 14.62
34 TestAddons/parallel/Ingress 19.96
35 TestAddons/parallel/InspektorGadget 10.29
36 TestAddons/parallel/MetricsServer 5.29
39 TestAddons/parallel/CSI 51.79
40 TestAddons/parallel/Headlamp 16.64
41 TestAddons/parallel/CloudSpanner 5.21
42 TestAddons/parallel/LocalPath 41.82
43 TestAddons/parallel/NvidiaDevicePlugin 6.2
44 TestAddons/parallel/Yakd 10.3
45 TestAddons/StoppedEnableDisable 12.4
53 TestHyperKitDriverInstallOrUpdate 10.7
56 TestErrorSpam/setup 34.64
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.23
59 TestErrorSpam/pause 0.68
60 TestErrorSpam/unpause 0.63
61 TestErrorSpam/stop 64.25
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 80.3
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 36.64
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.66
73 TestFunctional/serial/CacheCmd/cache/add_local 1.13
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.64
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.75
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
81 TestFunctional/serial/ExtraConfig 37.37
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 0.64
84 TestFunctional/serial/LogsFileCmd 0.63
85 TestFunctional/serial/InvalidService 4.32
87 TestFunctional/parallel/ConfigCmd 0.22
88 TestFunctional/parallel/DashboardCmd 11.73
89 TestFunctional/parallel/DryRun 0.23
90 TestFunctional/parallel/InternationalLanguage 0.11
91 TestFunctional/parallel/StatusCmd 0.23
96 TestFunctional/parallel/AddonsCmd 0.09
97 TestFunctional/parallel/PersistentVolumeClaim 25.03
99 TestFunctional/parallel/SSHCmd 0.12
100 TestFunctional/parallel/CpCmd 0.38
102 TestFunctional/parallel/FileSync 0.06
103 TestFunctional/parallel/CertSync 0.36
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.09
111 TestFunctional/parallel/License 0.3
112 TestFunctional/parallel/Version/short 0.07
113 TestFunctional/parallel/Version/components 0.15
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.06
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
118 TestFunctional/parallel/ImageCommands/ImageBuild 1.94
119 TestFunctional/parallel/ImageCommands/Setup 1.84
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.46
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.12
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.13
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.16
127 TestFunctional/parallel/DockerEnv/bash 0.25
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.21
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.1
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
142 TestFunctional/parallel/MountCmd/any-port 4.39
143 TestFunctional/parallel/MountCmd/specific-port 0.83
144 TestFunctional/parallel/MountCmd/VerifyCleanup 0.77
145 TestFunctional/parallel/ServiceCmd/DeployApp 8.09
146 TestFunctional/parallel/ServiceCmd/List 0.29
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
149 TestFunctional/parallel/ServiceCmd/Format 0.09
150 TestFunctional/parallel/ServiceCmd/URL 0.1
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.12
152 TestFunctional/parallel/ProfileCmd/profile_list 0.12
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 180.04
161 TestMultiControlPlane/serial/DeployApp 4.19
162 TestMultiControlPlane/serial/PingHostFromPods 0.73
163 TestMultiControlPlane/serial/AddWorkerNode 86.76
164 TestMultiControlPlane/serial/NodeLabels 0.8
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.27
166 TestMultiControlPlane/serial/CopyFile 4.1
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 151.1
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 3.22
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
259 TestStoppedBinaryUpgrade/Setup 0.96
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.47
277 TestNoKubernetes/serial/Stop 1.93
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
294 TestStartStop/group/old-k8s-version/serial/Stop 3.23
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
307 TestStartStop/group/no-preload/serial/Stop 3.33
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
316 TestStartStop/group/embed-certs/serial/Stop 3.4
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.79
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
336 TestStartStop/group/newest-cni/serial/Stop 3.99
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-511000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-511000: exit status 85 (94.973083ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-511000 | jenkins | v1.33.1 | 16 Aug 24 09:47 PDT |          |
	|         | -p download-only-511000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 09:47:30
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 09:47:30.873357    2056 out.go:345] Setting OutFile to fd 1 ...
	I0816 09:47:30.873490    2056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:47:30.873494    2056 out.go:358] Setting ErrFile to fd 2...
	I0816 09:47:30.873497    2056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:47:30.873627    2056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	W0816 09:47:30.873712    2056 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19461-1189/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19461-1189/.minikube/config/config.json: no such file or directory
	I0816 09:47:30.874950    2056 out.go:352] Setting JSON to true
	I0816 09:47:30.892555    2056 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1013,"bootTime":1723825837,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 09:47:30.892628    2056 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 09:47:30.897982    2056 out.go:97] [download-only-511000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 09:47:30.898151    2056 notify.go:220] Checking for updates...
	W0816 09:47:30.898177    2056 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball: no such file or directory
	I0816 09:47:30.900868    2056 out.go:169] MINIKUBE_LOCATION=19461
	I0816 09:47:30.903957    2056 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 09:47:30.907823    2056 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 09:47:30.910911    2056 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 09:47:30.913919    2056 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	W0816 09:47:30.919884    2056 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 09:47:30.920131    2056 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 09:47:30.924981    2056 out.go:97] Using the qemu2 driver based on user configuration
	I0816 09:47:30.925005    2056 start.go:297] selected driver: qemu2
	I0816 09:47:30.925022    2056 start.go:901] validating driver "qemu2" against <nil>
	I0816 09:47:30.925111    2056 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 09:47:30.928842    2056 out.go:169] Automatically selected the socket_vmnet network
	I0816 09:47:30.934316    2056 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0816 09:47:30.934406    2056 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 09:47:30.934510    2056 cni.go:84] Creating CNI manager for ""
	I0816 09:47:30.934532    2056 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0816 09:47:30.934592    2056 start.go:340] cluster config:
	{Name:download-only-511000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-511000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 09:47:30.939972    2056 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 09:47:30.944934    2056 out.go:97] Downloading VM boot image ...
	I0816 09:47:30.944958    2056 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/iso/arm64/minikube-v1.33.1-1723740674-19452-arm64.iso
	I0816 09:47:43.296113    2056 out.go:97] Starting "download-only-511000" primary control-plane node in "download-only-511000" cluster
	I0816 09:47:43.296144    2056 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 09:47:43.361607    2056 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 09:47:43.361631    2056 cache.go:56] Caching tarball of preloaded images
	I0816 09:47:43.361830    2056 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 09:47:43.365892    2056 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0816 09:47:43.365899    2056 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 09:47:43.455485    2056 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0816 09:47:51.797162    2056 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 09:47:51.797322    2056 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 09:47:52.492236    2056 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0816 09:47:52.492428    2056 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/download-only-511000/config.json ...
	I0816 09:47:52.492445    2056 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/download-only-511000/config.json: {Name:mkc418fcfc00b5e6e5137590cd2b24f7a7265e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 09:47:52.492658    2056 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0816 09:47:52.492856    2056 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0816 09:47:52.906201    2056 out.go:193] 
	W0816 09:47:52.914376    2056 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19461-1189/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10764f960 0x10764f960 0x10764f960 0x10764f960 0x10764f960 0x10764f960 0x10764f960] Decompressors:map[bz2:0x140008137e0 gz:0x140008137e8 tar:0x14000813790 tar.bz2:0x140008137a0 tar.gz:0x140008137b0 tar.xz:0x140008137c0 tar.zst:0x140008137d0 tbz2:0x140008137a0 tgz:0x140008137b0 txz:0x140008137c0 tzst:0x140008137d0 xz:0x140008137f0 zip:0x14000813800 zst:0x140008137f8] Getters:map[file:0x140017d28a0 http:0x14000546190 https:0x140005461e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0816 09:47:52.914398    2056 out_reason.go:110] 
	W0816 09:47:52.921274    2056 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 09:47:52.925221    2056 out.go:193] 
	
	
	* The control-plane node download-only-511000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-511000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
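
The 404 in the warning above is the expected download failure for this profile: dl.k8s.io apparently publishes no darwin/arm64 kubectl build for v1.20.0, so the checksum file fetch fails and minikube's downloader (hashicorp/go-getter, per the "getter:" state dump) aborts before the binary is ever fetched. A minimal reproduction of that checksum-first behavior, assuming go-getter v1's GetFile helper (a sketch, not part of the test suite):

	package main

	import (
		"fmt"

		"github.com/hashicorp/go-getter"
	)

	func main() {
		// Same URL shape as the failed download above: go-getter resolves the
		// checksum file named in the query before the payload, so a 404 on the
		// .sha256 fails the whole transfer without downloading the binary.
		src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
			"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		if err := getter.GetFile("/tmp/kubectl", src); err != nil {
			fmt.Println("download failed:", err) // e.g. "bad response code: 404"
		}
	}

Because the checksum is resolved first, a missing .sha256 fails fast instead of leaving an unverified binary in the cache.
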
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-511000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0/json-events (7.06s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-418000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-418000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (7.060030458s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (7.06s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-418000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-418000: exit status 85 (75.386083ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-511000 | jenkins | v1.33.1 | 16 Aug 24 09:47 PDT |                     |
	|         | -p download-only-511000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 16 Aug 24 09:47 PDT | 16 Aug 24 09:47 PDT |
	| delete  | -p download-only-511000        | download-only-511000 | jenkins | v1.33.1 | 16 Aug 24 09:47 PDT | 16 Aug 24 09:47 PDT |
	| start   | -o=json --download-only        | download-only-418000 | jenkins | v1.33.1 | 16 Aug 24 09:47 PDT |                     |
	|         | -p download-only-418000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 09:47:53
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 09:47:53.338726    2083 out.go:345] Setting OutFile to fd 1 ...
	I0816 09:47:53.338835    2083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:47:53.338839    2083 out.go:358] Setting ErrFile to fd 2...
	I0816 09:47:53.338841    2083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:47:53.338956    2083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 09:47:53.340025    2083 out.go:352] Setting JSON to true
	I0816 09:47:53.356077    2083 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1036,"bootTime":1723825837,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 09:47:53.356150    2083 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 09:47:53.359692    2083 out.go:97] [download-only-418000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 09:47:53.359797    2083 notify.go:220] Checking for updates...
	I0816 09:47:53.362563    2083 out.go:169] MINIKUBE_LOCATION=19461
	I0816 09:47:53.365584    2083 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 09:47:53.369691    2083 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 09:47:53.372600    2083 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 09:47:53.375589    2083 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	W0816 09:47:53.380059    2083 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 09:47:53.380233    2083 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 09:47:53.383625    2083 out.go:97] Using the qemu2 driver based on user configuration
	I0816 09:47:53.383636    2083 start.go:297] selected driver: qemu2
	I0816 09:47:53.383639    2083 start.go:901] validating driver "qemu2" against <nil>
	I0816 09:47:53.383693    2083 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 09:47:53.386614    2083 out.go:169] Automatically selected the socket_vmnet network
	I0816 09:47:53.391754    2083 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0816 09:47:53.391852    2083 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 09:47:53.391885    2083 cni.go:84] Creating CNI manager for ""
	I0816 09:47:53.391897    2083 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0816 09:47:53.391903    2083 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 09:47:53.391949    2083 start.go:340] cluster config:
	{Name:download-only-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 09:47:53.395389    2083 iso.go:125] acquiring lock: {Name:mkc573242530815db51d2b2313508a45619bdc04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 09:47:53.398589    2083 out.go:97] Starting "download-only-418000" primary control-plane node in "download-only-418000" cluster
	I0816 09:47:53.398599    2083 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 09:47:53.460359    2083 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 09:47:53.460373    2083 cache.go:56] Caching tarball of preloaded images
	I0816 09:47:53.460540    2083 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0816 09:47:53.465747    2083 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0816 09:47:53.465755    2083 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 09:47:53.554130    2083 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0816 09:47:57.856242    2083 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0816 09:47:57.856435    2083 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19461-1189/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-418000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-418000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
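
Unlike the v1.20.0 profile (cni.go:162, "CNI unnecessary in this configuration"), this run hits cni.go:158 and recommends the bridge CNI, which is why the config dump above carries NetworkPlugin:cni: per the log line, the qemu2 driver plus the docker runtime on Kubernetes v1.24+ triggers the recommendation. A sketch of that kind of version gate, using golang.org/x/mod/semver for the comparison (a hypothetical helper, not minikube's actual code):

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// recommendCNI mirrors the decision visible in the two runs above:
	// the docker runtime on Kubernetes v1.24+ gets the bridge CNI,
	// older releases get none.
	func recommendCNI(k8sVersion, runtime string) string {
		if runtime == "docker" && semver.Compare(k8sVersion, "v1.24") >= 0 {
			return "bridge"
		}
		return ""
	}

	func main() {
		fmt.Println(recommendCNI("v1.20.0", "docker")) // "" - CNI unnecessary
		fmt.Println(recommendCNI("v1.31.0", "docker")) // "bridge"
	}
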
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-418000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.29s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-325000 --alsologtostderr --binary-mirror http://127.0.0.1:49319 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-325000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-325000
--- PASS: TestBinaryMirror (0.29s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-851000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-851000: exit status 85 (58.347792ms)

-- stdout --
	* Profile "addons-851000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-851000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-851000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-851000: exit status 85 (62.305917ms)

-- stdout --
	* Profile "addons-851000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-851000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (197.85s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-851000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-851000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m17.851817958s)
--- PASS: TestAddons/Setup (197.85s)

TestAddons/serial/Volcano (37.38s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 10.203333ms
addons_test.go:913: volcano-controller stabilized in 10.221917ms
addons_test.go:897: volcano-scheduler stabilized in 10.251542ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-7ztb6" [855dcb8b-5923-4414-9d0a-10092b0495a6] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005210375s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-9jgfc" [e1996e36-8c9d-4b67-a905-0ab6cec9bed9] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004553667s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-v4lfm" [ea265488-fb12-4f08-accb-2cc84ee2f72b] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002708042s
addons_test.go:932: (dbg) Run:  kubectl --context addons-851000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-851000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-851000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [445eb65c-a2d9-4254-bc22-f9630cbef3d1] Pending
helpers_test.go:344: "test-job-nginx-0" [445eb65c-a2d9-4254-bc22-f9630cbef3d1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [445eb65c-a2d9-4254-bc22-f9630cbef3d1] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.005326625s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-851000 addons disable volcano --alsologtostderr -v=1: (10.143517541s)
--- PASS: TestAddons/serial/Volcano (37.38s)

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-851000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-851000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Registry (14.62s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.243542ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-gcxj4" [2759e08f-1918-4e55-8190-9ed84b5035bc] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003171125s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5rt9t" [e76f10bb-1daf-49b0-8a99-55cd34a3ae63] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013044459s
addons_test.go:342: (dbg) Run:  kubectl --context addons-851000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-851000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-851000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.296010375s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 ip
2024/08/16 09:52:27 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 addons disable registry --alsologtostderr -v=1
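
The wget --spider step above is the end-to-end check: it resolves the registry service's cluster DNS name from inside a pod and requests only the headers. An equivalent probe in Go, which likewise resolves only from inside the cluster network (a sketch, not part of the suite):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// HEAD request against the in-cluster service name, mirroring
		// wget --spider -S http://registry.kube-system.svc.cluster.local
		resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}
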
--- PASS: TestAddons/parallel/Registry (14.62s)

TestAddons/parallel/Ingress (19.96s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-851000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-851000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-851000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1bd99260-fab9-4000-9143-d11fc3cd9e3e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1bd99260-fab9-4000-9143-d11fc3cd9e3e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.007164333s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-851000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-arm64 -p addons-851000 addons disable ingress-dns --alsologtostderr -v=1: (1.050173375s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-851000 addons disable ingress --alsologtostderr -v=1: (7.27324675s)
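
The nslookup step above exercises the ingress-dns addon by querying the minikube node (192.168.105.2, reported by the "ip" call just before it) directly as a DNS server for the hello-john.test name. The same lookup in Go with a custom resolver (a sketch, using the IP from this log):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Send every query to the ingress-dns addon on the minikube node,
		// like `nslookup hello-john.test 192.168.105.2`.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "192.168.105.2:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(), "hello-john.test")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println(addrs)
	}
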
--- PASS: TestAddons/parallel/Ingress (19.96s)

TestAddons/parallel/InspektorGadget (10.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-w5f95" [62a6afa8-4bb7-461b-9154-0d1a3a432bf4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013032417s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-851000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-851000: (5.27343175s)
--- PASS: TestAddons/parallel/InspektorGadget (10.29s)

TestAddons/parallel/MetricsServer (5.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.439834ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-2ssxk" [8ad8b71d-dd52-4a9d-8da4-3e36272d2502] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006531542s
addons_test.go:417: (dbg) Run:  kubectl --context addons-851000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.29s)

TestAddons/parallel/CSI (51.79s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 49.648042ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-851000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-851000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [89897433-c357-4316-9c9b-c018503917ea] Pending
helpers_test.go:344: "task-pv-pod" [89897433-c357-4316-9c9b-c018503917ea] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [89897433-c357-4316-9c9b-c018503917ea] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003874291s
addons_test.go:590: (dbg) Run:  kubectl --context addons-851000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-851000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-851000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-851000 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-851000 delete pod task-pv-pod: (1.229261541s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-851000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-851000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-851000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [08bd541b-6bc2-4cf9-b970-f8d2ae0c6fbf] Pending
helpers_test.go:344: "task-pv-pod-restore" [08bd541b-6bc2-4cf9-b970-f8d2ae0c6fbf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [08bd541b-6bc2-4cf9-b970-f8d2ae0c6fbf] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005705375s
addons_test.go:632: (dbg) Run:  kubectl --context addons-851000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-851000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-851000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-851000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.114374333s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 addons disable volumesnapshots --alsologtostderr -v=1
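
The long runs of "kubectl get pvc ... -o jsonpath={.status.phase}" above are poll loops waiting for each claim to reach the Bound phase. A client-go equivalent of that wait (a sketch; the kubeconfig path is hypothetical, and the 6m timeout matches the test's "waiting 6m0s" deadline):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Re-check the claim every 2s until it reports phase Bound or 6m elapse.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pvc, err := client.CoreV1().PersistentVolumeClaims("default").Get(ctx, "hpvc", metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				return pvc.Status.Phase == corev1.ClaimBound, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pvc hpvc is Bound")
	}
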
--- PASS: TestAddons/parallel/CSI (51.79s)

TestAddons/parallel/Headlamp (16.64s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-851000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-nxc7t" [ea86c132-3487-499e-b40c-58952698dc7d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-nxc7t" [ea86c132-3487-499e-b40c-58952698dc7d] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.009667167s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-851000 addons disable headlamp --alsologtostderr -v=1: (5.29376825s)
--- PASS: TestAddons/parallel/Headlamp (16.64s)

TestAddons/parallel/CloudSpanner (5.21s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-g6z2j" [5bcb5b5d-59ed-4c11-b383-20f69a01de88] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009352125s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-851000
--- PASS: TestAddons/parallel/CloudSpanner (5.21s)

TestAddons/parallel/LocalPath (41.82s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-851000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-851000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-851000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bcfe5d98-0bf4-4540-a9f8-532dfcfbd7b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bcfe5d98-0bf4-4540-a9f8-532dfcfbd7b5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bcfe5d98-0bf4-4540-a9f8-532dfcfbd7b5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005051709s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-851000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 ssh "cat /opt/local-path-provisioner/pvc-75ecaedc-7c4d-4d58-85ee-da5d0190f43f_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-851000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-851000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-851000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.336292583s)
--- PASS: TestAddons/parallel/LocalPath (41.82s)

TestAddons/parallel/NvidiaDevicePlugin (6.2s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mbqh4" [810be294-4ed5-41e5-b3ea-d98eae833a41] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.010816167s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-851000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.20s)

TestAddons/parallel/Yakd (10.3s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-kwgbz" [d0763751-3b3c-489b-a250-f8580bd421c1] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.009105625s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-851000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-851000 addons disable yakd --alsologtostderr -v=1: (5.288081208s)
--- PASS: TestAddons/parallel/Yakd (10.30s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-851000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-851000: (12.211361625s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-851000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-851000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-851000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (10.7s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.70s)

TestErrorSpam/setup (34.64s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-796000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-796000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 --driver=qemu2 : (34.640121459s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (34.64s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.23s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 status
--- PASS: TestErrorSpam/status (0.23s)

TestErrorSpam/pause (0.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 pause
--- PASS: TestErrorSpam/pause (0.68s)

TestErrorSpam/unpause (0.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

TestErrorSpam/stop (64.25s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 stop: (12.181944084s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 stop: (26.038468625s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-796000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-796000 stop: (26.028146542s)
--- PASS: TestErrorSpam/stop (64.25s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19461-1189/.minikube/files/etc/test/nested/copy/2054/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.3s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-435000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0816 09:56:19.049367    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:19.057475    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:19.070870    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:19.094228    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:19.137619    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:19.219993    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:19.383394    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:19.706833    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:20.350303    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:21.633670    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:24.196531    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:29.318678    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 09:56:39.560343    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-435000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m20.297554875s)
--- PASS: TestFunctional/serial/StartWithProxy (80.30s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.64s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-435000 --alsologtostderr -v=8
E0816 09:57:00.041377    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-435000 --alsologtostderr -v=8: (36.641869041s)
functional_test.go:663: soft start took 36.642256834s for "functional-435000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.64s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-435000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.66s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.66s)

TestFunctional/serial/CacheCmd/cache/add_local (1.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-435000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2688239471/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 cache add minikube-local-cache-test:functional-435000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 cache delete minikube-local-cache-test:functional-435000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-435000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-435000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (67.421291ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.75s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 kubectl -- --context functional-435000 get pods
E0816 09:57:41.003063    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.75s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-435000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-435000 get pods: (1.014854208s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (37.37s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-435000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-435000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.368501333s)
functional_test.go:761: restart took 37.36860375s for "functional-435000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.37s)

TestFunctional/serial/ComponentHealth (0.05s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-435000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (0.64s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.63s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd4114033526/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.63s)

TestFunctional/serial/InvalidService (4.32s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-435000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-435000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-435000: exit status 115 (143.691416ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32522 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-435000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-435000 delete -f testdata/invalidsvc.yaml: (1.086073917s)
--- PASS: TestFunctional/serial/InvalidService (4.32s)

TestFunctional/parallel/ConfigCmd (0.22s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-435000 config get cpus: exit status 14 (29.466625ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-435000 config get cpus: exit status 14 (31.690875ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

TestFunctional/parallel/DashboardCmd (11.73s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-435000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-435000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2923: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.73s)

TestFunctional/parallel/DryRun (0.23s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-435000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-435000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.625583ms)
-- stdout --
	* [functional-435000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0816 09:59:02.734353    2910 out.go:345] Setting OutFile to fd 1 ...
	I0816 09:59:02.734472    2910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:59:02.734475    2910 out.go:358] Setting ErrFile to fd 2...
	I0816 09:59:02.734478    2910 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:59:02.734597    2910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 09:59:02.735550    2910 out.go:352] Setting JSON to false
	I0816 09:59:02.751971    2910 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1705,"bootTime":1723825837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 09:59:02.752045    2910 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 09:59:02.756823    2910 out.go:177] * [functional-435000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0816 09:59:02.763737    2910 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 09:59:02.763797    2910 notify.go:220] Checking for updates...
	I0816 09:59:02.771737    2910 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 09:59:02.775798    2910 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 09:59:02.778771    2910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 09:59:02.781818    2910 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 09:59:02.784707    2910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 09:59:02.788164    2910 config.go:182] Loaded profile config "functional-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 09:59:02.788408    2910 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 09:59:02.792785    2910 out.go:177] * Using the qemu2 driver based on existing profile
	I0816 09:59:02.799795    2910 start.go:297] selected driver: qemu2
	I0816 09:59:02.799802    2910 start.go:901] validating driver "qemu2" against &{Name:functional-435000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-435000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 09:59:02.799855    2910 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 09:59:02.805569    2910 out.go:201] 
	W0816 09:59:02.809762    2910 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0816 09:59:02.813740    2910 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-435000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
E0816 09:59:02.923786    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/DryRun (0.23s)

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-435000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-435000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.998791ms)
-- stdout --
	* [functional-435000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I0816 09:59:02.384880    2900 out.go:345] Setting OutFile to fd 1 ...
	I0816 09:59:02.384977    2900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:59:02.384980    2900 out.go:358] Setting ErrFile to fd 2...
	I0816 09:59:02.384993    2900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 09:59:02.385149    2900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
	I0816 09:59:02.386582    2900 out.go:352] Setting JSON to false
	I0816 09:59:02.403826    2900 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1705,"bootTime":1723825837,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0816 09:59:02.403910    2900 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0816 09:59:02.408413    2900 out.go:177] * [functional-435000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0816 09:59:02.415353    2900 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 09:59:02.415438    2900 notify.go:220] Checking for updates...
	I0816 09:59:02.423292    2900 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	I0816 09:59:02.426325    2900 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0816 09:59:02.429296    2900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 09:59:02.432331    2900 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	I0816 09:59:02.435379    2900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 09:59:02.438651    2900 config.go:182] Loaded profile config "functional-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0816 09:59:02.438913    2900 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 09:59:02.443240    2900 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0816 09:59:02.450357    2900 start.go:297] selected driver: qemu2
	I0816 09:59:02.450364    2900 start.go:901] validating driver "qemu2" against &{Name:functional-435000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-435000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 09:59:02.450406    2900 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 09:59:02.456274    2900 out.go:201] 
	W0816 09:59:02.460261    2900 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0816 09:59:02.464231    2900 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.23s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.23s)

TestFunctional/parallel/AddonsCmd (0.09s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (25.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3cf8862d-86d8-4966-b9f8-20d44fc0e419] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003627083s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-435000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-435000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-435000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-435000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b3f3f085-c48b-4671-bbd9-97bb024ffcbc] Pending
helpers_test.go:344: "sp-pod" [b3f3f085-c48b-4671-bbd9-97bb024ffcbc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b3f3f085-c48b-4671-bbd9-97bb024ffcbc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.008209542s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-435000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-435000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-435000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8d3444f9-0947-4d5f-99c9-bd7d3c909840] Pending
helpers_test.go:344: "sp-pod" [8d3444f9-0947-4d5f-99c9-bd7d3c909840] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8d3444f9-0947-4d5f-99c9-bd7d3c909840] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009863875s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-435000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.03s)

TestFunctional/parallel/SSHCmd (0.12s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.38s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh -n functional-435000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 cp functional-435000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1971353104/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh -n functional-435000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh -n functional-435000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.38s)

TestFunctional/parallel/FileSync (0.06s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2054/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "sudo cat /etc/test/nested/copy/2054/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.36s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2054.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "sudo cat /etc/ssl/certs/2054.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2054.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "sudo cat /usr/share/ca-certificates/2054.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/20542.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "sudo cat /etc/ssl/certs/20542.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/20542.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "sudo cat /usr/share/ca-certificates/20542.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.36s)

TestFunctional/parallel/NodeLabels (0.04s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-435000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-435000 ssh "sudo systemctl is-active crio": exit status 1 (90.118792ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)

TestFunctional/parallel/License (0.3s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.15s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-435000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-435000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-435000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-435000 image ls --format short --alsologtostderr:
I0816 09:59:14.916793    2929 out.go:345] Setting OutFile to fd 1 ...
I0816 09:59:14.916962    2929 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 09:59:14.916970    2929 out.go:358] Setting ErrFile to fd 2...
I0816 09:59:14.916972    2929 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 09:59:14.917115    2929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
I0816 09:59:14.917573    2929 config.go:182] Loaded profile config "functional-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 09:59:14.917634    2929 config.go:182] Loaded profile config "functional-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 09:59:14.918522    2929 ssh_runner.go:195] Run: systemctl --version
I0816 09:59:14.918531    2929 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/functional-435000/id_rsa Username:docker}
I0816 09:59:14.942872    2929 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.06s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-435000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/minikube-local-cache-test | functional-435000 | d3c52e0fec56b | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| localhost/my-image                          | functional-435000 | 45333a4c116f1 | 1.41MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-435000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-435000 image ls --format table --alsologtostderr:
I0816 09:59:17.060605    2941 out.go:345] Setting OutFile to fd 1 ...
I0816 09:59:17.060744    2941 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 09:59:17.060747    2941 out.go:358] Setting ErrFile to fd 2...
I0816 09:59:17.060749    2941 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 09:59:17.060886    2941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
I0816 09:59:17.061283    2941 config.go:182] Loaded profile config "functional-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 09:59:17.061343    2941 config.go:182] Loaded profile config "functional-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 09:59:17.062128    2941 ssh_runner.go:195] Run: systemctl --version
I0816 09:59:17.062136    2941 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/functional-435000/id_rsa Username:docker}
I0816 09:59:17.082906    2941 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.06s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-435000 image ls --format json --alsologtostderr:
[{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"45333a4c116f1436ce5e0328c56c5bfff451e62c0a586eac5766819ba54285b8","repoDigests":[],"repoTags":["localhost/my-image:functional-435000"],"size":"1410000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"d3c52e0fec56b041fa01bc19cc3a03e66b9be524b2cfec71a560e97062e97448","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-435000"],"size":"30"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-435000"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-435000 image ls --format json --alsologtostderr:
I0816 09:59:16.994190    2939 out.go:345] Setting OutFile to fd 1 ...
I0816 09:59:16.994327    2939 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 09:59:16.994331    2939 out.go:358] Setting ErrFile to fd 2...
I0816 09:59:16.994333    2939 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 09:59:16.994458    2939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
I0816 09:59:16.994847    2939 config.go:182] Loaded profile config "functional-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 09:59:16.994915    2939 config.go:182] Loaded profile config "functional-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 09:59:16.995692    2939 ssh_runner.go:195] Run: systemctl --version
I0816 09:59:16.995708    2939 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/functional-435000/id_rsa Username:docker}
I0816 09:59:17.018570    2939 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)
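
For reference, the JSON form above is a single array of image objects, which makes it easy to post-process on the host. A minimal sketch, assuming jq is available on the host (the field names are exactly those in the output above):

    out/minikube-darwin-arm64 -p functional-435000 image ls --format json \
        | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'   # one "tag<TAB>size-in-bytes" line per image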

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-435000 image ls --format yaml --alsologtostderr:
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: d3c52e0fec56b041fa01bc19cc3a03e66b9be524b2cfec71a560e97062e97448
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-435000
size: "30"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-435000
size: "4780000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-435000 image ls --format yaml --alsologtostderr:
I0816 09:59:14.987351    2931 out.go:345] Setting OutFile to fd 1 ...
I0816 09:59:14.987531    2931 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 09:59:14.987537    2931 out.go:358] Setting ErrFile to fd 2...
I0816 09:59:14.987540    2931 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 09:59:14.987729    2931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
I0816 09:59:14.988195    2931 config.go:182] Loaded profile config "functional-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 09:59:14.988255    2931 config.go:182] Loaded profile config "functional-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 09:59:14.989054    2931 ssh_runner.go:195] Run: systemctl --version
I0816 09:59:14.989063    2931 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/functional-435000/id_rsa Username:docker}
I0816 09:59:15.009805    2931 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-435000 ssh pgrep buildkitd: exit status 1 (54.391584ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image build -t localhost/my-image:functional-435000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-435000 image build -t localhost/my-image:functional-435000 testdata/build --alsologtostderr: (1.812221584s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-435000 image build -t localhost/my-image:functional-435000 testdata/build --alsologtostderr:
I0816 09:59:15.113800    2935 out.go:345] Setting OutFile to fd 1 ...
I0816 09:59:15.114029    2935 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 09:59:15.114034    2935 out.go:358] Setting ErrFile to fd 2...
I0816 09:59:15.114037    2935 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 09:59:15.114172    2935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19461-1189/.minikube/bin
I0816 09:59:15.114600    2935 config.go:182] Loaded profile config "functional-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 09:59:15.115302    2935 config.go:182] Loaded profile config "functional-435000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0816 09:59:15.116156    2935 ssh_runner.go:195] Run: systemctl --version
I0816 09:59:15.116165    2935 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19461-1189/.minikube/machines/functional-435000/id_rsa Username:docker}
I0816 09:59:15.137480    2935 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2221715921.tar
I0816 09:59:15.137523    2935 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0816 09:59:15.141602    2935 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2221715921.tar
I0816 09:59:15.143301    2935 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2221715921.tar: stat -c "%s %y" /var/lib/minikube/build/build.2221715921.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2221715921.tar': No such file or directory
I0816 09:59:15.143320    2935 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2221715921.tar --> /var/lib/minikube/build/build.2221715921.tar (3072 bytes)
I0816 09:59:15.152184    2935 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2221715921
I0816 09:59:15.155450    2935 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2221715921 -xf /var/lib/minikube/build/build.2221715921.tar
I0816 09:59:15.158777    2935 docker.go:360] Building image: /var/lib/minikube/build/build.2221715921
I0816 09:59:15.158834    2935 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-435000 /var/lib/minikube/build/build.2221715921
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 0.1s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers done
#8 writing image sha256:45333a4c116f1436ce5e0328c56c5bfff451e62c0a586eac5766819ba54285b8 done
#8 naming to localhost/my-image:functional-435000 done
#8 DONE 0.0s
I0816 09:59:16.883068    2935 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-435000 /var/lib/minikube/build/build.2221715921: (1.724265208s)
I0816 09:59:16.883135    2935 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2221715921
I0816 09:59:16.887045    2935 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2221715921.tar
I0816 09:59:16.890536    2935 build_images.go:217] Built localhost/my-image:functional-435000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2221715921.tar
I0816 09:59:16.890560    2935 build_images.go:133] succeeded building to: functional-435000
I0816 09:59:16.890563    2935 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.94s)
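
The build log above records every step of the Dockerfile under testdata/build: a gcr.io/k8s-minikube/busybox:latest base (#5), a no-op RUN true (#6), and ADD content.txt / (#7). A minimal sketch reconstructing an equivalent build outside the harness; the directory, placeholder file contents, and exact Dockerfile below are illustrative reconstructions from the step log, not the actual test fixture:

    mkdir -p /tmp/build && cd /tmp/build
    echo placeholder > content.txt          # real fixture contents are not shown in the log
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF
    out/minikube-darwin-arm64 -p functional-435000 image build -t localhost/my-image:functional-435000 .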

TestFunctional/parallel/ImageCommands/Setup (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.82273225s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-435000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image load --daemon kicbase/echo-server:functional-435000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image load --daemon kicbase/echo-server:functional-435000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-435000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image load --daemon kicbase/echo-server:functional-435000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image save kicbase/echo-server:functional-435000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.12s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image rm kicbase/echo-server:functional-435000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.13s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-435000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 image save --daemon kicbase/echo-server:functional-435000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-435000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.16s)
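
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon exercise a full image round-trip through a tarball. A condensed sketch of the same flow using the commands from the log; the /tmp path is an arbitrary substitute for the workspace path above:

    out/minikube-darwin-arm64 -p functional-435000 image save kicbase/echo-server:functional-435000 /tmp/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-435000 image rm kicbase/echo-server:functional-435000
    out/minikube-darwin-arm64 -p functional-435000 image load /tmp/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-435000 image ls   # the tag should be listed again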

TestFunctional/parallel/DockerEnv/bash (0.25s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-435000 docker-env) && out/minikube-darwin-arm64 status -p functional-435000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-435000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.25s)
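
The DockerEnv check confirms that docker-env points a host docker client at the VM's daemon. The same bash one-liner from the test works interactively; docker images should then list the in-cluster images shown earlier:

    eval $(out/minikube-darwin-arm64 -p functional-435000 docker-env) && docker images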

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-435000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-435000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-435000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2764: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-435000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-435000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-435000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [56da760b-cb4a-4ad3-9e7b-575beabb8234] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [56da760b-cb4a-4ad3-9e7b-575beabb8234] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.006337542s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.10s)
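
The harness polls for pods labeled run=nginx-svc itself; outside the harness, kubectl can perform an equivalent wait directly. A sketch (the label and the 4m budget mirror the test; kubectl wait is standard kubectl):

    kubectl --context functional-435000 apply -f testdata/testsvc.yaml
    kubectl --context functional-435000 wait --for=condition=Ready pod -l run=nginx-svc --timeout=4m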

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-435000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.161.138 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-435000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
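
The tunnel serial group amounts to: start a tunnel, read the LoadBalancer ingress IP it assigns, and hit the service. A condensed sketch reusing the IngressIP lookup from the log; the curl probe is illustrative (the test does its own HTTP check), and the tunnel must keep running in the background:

    out/minikube-darwin-arm64 -p functional-435000 tunnel --alsologtostderr &
    IP=$(kubectl --context functional-435000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl "http://$IP"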

TestFunctional/parallel/MountCmd/any-port (4.39s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-435000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2397555710/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723827527085928000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2397555710/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723827527085928000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2397555710/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723827527085928000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2397555710/001/test-1723827527085928000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-435000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (59.761292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 16 16:58 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 16 16:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 16 16:58 test-1723827527085928000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh cat /mount-9p/test-1723827527085928000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-435000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a92fae9a-69e1-4df5-918d-e0727f800f8e] Pending
helpers_test.go:344: "busybox-mount" [a92fae9a-69e1-4df5-918d-e0727f800f8e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a92fae9a-69e1-4df5-918d-e0727f800f8e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a92fae9a-69e1-4df5-918d-e0727f800f8e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003478083s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-435000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-435000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2397555710/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.39s)
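
The any-port test drives minikube mount end to end: mount a host directory at /mount-9p, confirm the 9p mount from inside the guest, then unmount. The equivalent manual flow, with a hypothetical /tmp/hostdir standing in for the temp directory above:

    out/minikube-darwin-arm64 mount -p functional-435000 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-darwin-arm64 -p functional-435000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-arm64 -p functional-435000 ssh -- ls -la /mount-9p
    out/minikube-darwin-arm64 mount -p functional-435000 --kill=true   # kills all mounts for the profile, as VerifyCleanup does below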

TestFunctional/parallel/MountCmd/specific-port (0.83s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-435000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1046942085/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-435000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (55.262583ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-435000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1046942085/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-435000 ssh "sudo umount -f /mount-9p": exit status 1 (57.218375ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-435000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-435000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1046942085/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.83s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.77s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-435000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup880919694/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-435000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup880919694/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-435000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup880919694/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-435000 ssh "findmnt -T" /mount1: exit status 1 (62.311542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-435000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-435000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup880919694/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-435000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup880919694/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-435000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup880919694/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.77s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-435000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-435000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-chfd8" [47f772f8-ae5b-46eb-af5b-0ca0dedb72c9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-chfd8" [47f772f8-ae5b-46eb-af5b-0ca0dedb72c9] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004626875s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.09s)
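
DeployApp is plain kubectl: create a deployment and expose it as a NodePort; the later ServiceCmd subtests then resolve its URL. The two commands as run by the test:

    kubectl --context functional-435000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-435000 expose deployment hello-node --type=NodePort --port=8080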

TestFunctional/parallel/ServiceCmd/List (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 service list -o json
functional_test.go:1494: Took "280.62175ms" to run "out/minikube-darwin-arm64 -p functional-435000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30449
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-435000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30449
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
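
The endpoint printed above can be consumed directly; a one-line sketch combining the test's URL lookup with a request (curl is an assumption, any HTTP client works):

    curl "$(out/minikube-darwin-arm64 -p functional-435000 service hello-node --url)"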

TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "84.029709ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.684542ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "79.854875ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "32.337791ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-435000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-435000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-435000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (180.04s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-881000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0816 10:01:19.048574    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:01:46.793167    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-881000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m59.850540041s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (180.04s)
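
For reference, the full invocation used to bring up the multi-control-plane cluster, followed by its status check (the --ha flag requests a highly-available topology as exercised by this test; memory is in MB):

    out/minikube-darwin-arm64 start -p ha-881000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2
    out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr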

TestMultiControlPlane/serial/DeployApp (4.19s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-881000 -- rollout status deployment/busybox: (2.748866083s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-5jjhw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-v298s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-x9772 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-5jjhw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-v298s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-x9772 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-5jjhw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-v298s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-x9772 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.19s)

TestMultiControlPlane/serial/PingHostFromPods (0.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-5jjhw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-5jjhw -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-v298s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-v298s -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-x9772 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-881000 -- exec busybox-7dff88458-x9772 -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.73s)

TestMultiControlPlane/serial/AddWorkerNode (86.76s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-881000 -v=7 --alsologtostderr
E0816 10:03:25.943035    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:03:25.950654    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:03:25.963620    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:03:25.987010    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:03:26.030399    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:03:26.113753    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:03:26.277198    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:03:26.600549    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:03:27.242117    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:03:28.525566    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:03:31.087476    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:03:36.208845    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
E0816 10:03:46.452159    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-881000 -v=7 --alsologtostderr: (1m26.545304375s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (86.76s)
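Note: the E0816 cert_rotation lines above are background noise rather than a failure of this test: client-go's certificate-rotation watcher keeps retrying a client.crt that belonged to the already-deleted functional-435000 profile, and the gaps between the timestamps (roughly doubling from ~7ms up to ~10s) show its backoff. A Go sketch of a pre-flight check for the underlying condition (hypothetical helper, not minikube code):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the log lines above; os.Stat reproduces the
        // "no such file or directory" the rotation watcher keeps hitting.
        crt := "/Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/functional-435000/client.crt"
        if _, err := os.Stat(crt); os.IsNotExist(err) {
            fmt.Printf("stale kubeconfig reference: %s\n", crt)
        }
    }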

TestMultiControlPlane/serial/NodeLabels (0.8s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-881000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.80s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.27s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.27s)

TestMultiControlPlane/serial/CopyFile (4.1s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp testdata/cp-test.txt ha-881000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1542642167/001/cp-test_ha-881000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000:/home/docker/cp-test.txt ha-881000-m02:/home/docker/cp-test_ha-881000_ha-881000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m02 "sudo cat /home/docker/cp-test_ha-881000_ha-881000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000:/home/docker/cp-test.txt ha-881000-m03:/home/docker/cp-test_ha-881000_ha-881000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m03 "sudo cat /home/docker/cp-test_ha-881000_ha-881000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000:/home/docker/cp-test.txt ha-881000-m04:/home/docker/cp-test_ha-881000_ha-881000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m04 "sudo cat /home/docker/cp-test_ha-881000_ha-881000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp testdata/cp-test.txt ha-881000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1542642167/001/cp-test_ha-881000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000-m02:/home/docker/cp-test.txt ha-881000:/home/docker/cp-test_ha-881000-m02_ha-881000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000 "sudo cat /home/docker/cp-test_ha-881000-m02_ha-881000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000-m02:/home/docker/cp-test.txt ha-881000-m03:/home/docker/cp-test_ha-881000-m02_ha-881000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m03 "sudo cat /home/docker/cp-test_ha-881000-m02_ha-881000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000-m02:/home/docker/cp-test.txt ha-881000-m04:/home/docker/cp-test_ha-881000-m02_ha-881000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m04 "sudo cat /home/docker/cp-test_ha-881000-m02_ha-881000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp testdata/cp-test.txt ha-881000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1542642167/001/cp-test_ha-881000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000-m03:/home/docker/cp-test.txt ha-881000:/home/docker/cp-test_ha-881000-m03_ha-881000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000 "sudo cat /home/docker/cp-test_ha-881000-m03_ha-881000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000-m03:/home/docker/cp-test.txt ha-881000-m02:/home/docker/cp-test_ha-881000-m03_ha-881000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m02 "sudo cat /home/docker/cp-test_ha-881000-m03_ha-881000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000-m03:/home/docker/cp-test.txt ha-881000-m04:/home/docker/cp-test_ha-881000-m03_ha-881000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m04 "sudo cat /home/docker/cp-test_ha-881000-m03_ha-881000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp testdata/cp-test.txt ha-881000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1542642167/001/cp-test_ha-881000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000-m04:/home/docker/cp-test.txt ha-881000:/home/docker/cp-test_ha-881000-m04_ha-881000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000 "sudo cat /home/docker/cp-test_ha-881000-m04_ha-881000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000-m04:/home/docker/cp-test.txt ha-881000-m02:/home/docker/cp-test_ha-881000-m04_ha-881000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m02 "sudo cat /home/docker/cp-test_ha-881000-m04_ha-881000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 cp ha-881000-m04:/home/docker/cp-test.txt ha-881000-m03:/home/docker/cp-test_ha-881000-m04_ha-881000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-881000 ssh -n ha-881000-m03 "sudo cat /home/docker/cp-test_ha-881000-m04_ha-881000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.10s)
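Note: the cp/ssh sequence above is a systematic matrix, not an ad-hoc list: testdata/cp-test.txt is copied onto each node, back out to the host temp dir, and then across every ordered pair of distinct nodes, with a sudo cat verifying the file after each hop. A sketch that regenerates the same node-to-node pair list (node names taken from this run):

    package main

    import "fmt"

    func main() {
        nodes := []string{"ha-881000", "ha-881000-m02", "ha-881000-m03", "ha-881000-m04"}
        for _, src := range nodes {
            for _, dst := range nodes {
                if src == dst {
                    continue // no self-copy in the log above
                }
                fmt.Printf("cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
                    src, dst, src, dst)
            }
        }
    }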

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (151.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m31.095974792s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (151.10s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.22s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-336000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-336000 --output=json --user=testUser: (3.223636292s)
--- PASS: TestJSONOutput/stop/Command (3.22s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-932000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-932000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.925417ms)

-- stdout --
	{"specversion":"1.0","id":"156052a2-06af-47a8-bc38-9664fc1b9cb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-932000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d46eb7ef-88f0-48f6-9812-c2b1a239e047","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19461"}}
	{"specversion":"1.0","id":"eb21ced0-b148-47e2-a813-cd93b6a4f38f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig"}}
	{"specversion":"1.0","id":"0996d14f-2251-43f5-8d40-6bb1e43b870d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"62d12a53-0afa-461b-8e60-a3cdceb5f018","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eac6202c-cb14-4f45-963b-ef19d622e922","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube"}}
	{"specversion":"1.0","id":"d66df227-2959-49c2-9d71-0876d073fc0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6d64ab28-8711-4e9d-8a18-7882df9126b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-932000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-932000
--- PASS: TestErrorJSONOutput (0.20s)
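Note: every stdout line above is a CloudEvents-style JSON event (specversion 1.0) with a minikube-specific type field and a data map whose values are all strings; the run ends on the io.k8s.sigs.minikube.error event carrying exitcode 56. A Go sketch for decoding one such line (the struct mirrors only the fields visible in this report, not minikube's own event type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type minikubeEvent struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        // The error event copied verbatim from the stdout block above.
        line := `{"specversion":"1.0","id":"6d64ab28-8711-4e9d-8a18-7882df9126b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
        var ev minikubeEvent
        if err := json.Unmarshal([]byte(line), &ev); err != nil {
            panic(err)
        }
        fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["message"])
    }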

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.96s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.96s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-283000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-283000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.873625ms)

-- stdout --
	* [NoKubernetes-283000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19461-1189/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19461-1189/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
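Note: exit status 14 is the usage-error path — the run fails in flag validation, before any VM work, because --no-kubernetes and --kubernetes-version are mutually exclusive. The shape of that guard, sketched with the standard flag package (illustrative; minikube's real implementation is cobra-based):

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noK8s := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
        k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
        flag.Parse()

        if *noK8s && *k8sVersion != "" {
            fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14) // matches the exit status asserted above
        }
    }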

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-283000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-283000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.820791ms)

-- stdout --
	* The control-plane node NoKubernetes-283000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-283000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
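Note: systemctl is-active --quiet carries its whole answer in the exit status (0 means active), so the assertion needs no output parsing; here the command never reaches systemd at all — exit 83 is minikube reporting the host as stopped, which for this test is equally good evidence that kubelet is not running. A Go sketch of the exit-code check as it would run against a live node (illustrative helper, not the minikube ssh plumbing):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // unitActive runs `systemctl is-active --quiet <unit>`; --quiet
    // suppresses output, so only the exit status carries the answer.
    func unitActive(unit string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", unitActive("kubelet"))
    }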

TestNoKubernetes/serial/ProfileList (31.47s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
E0816 10:41:19.001129    2054 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19461-1189/.minikube/profiles/addons-851000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.6785635s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.793050583s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.47s)

TestNoKubernetes/serial/Stop (1.93s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-283000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-283000: (1.933524583s)
--- PASS: TestNoKubernetes/serial/Stop (1.93s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-283000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-283000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.614667ms)

-- stdout --
	* The control-plane node NoKubernetes-283000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-283000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-403000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (3.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-782000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-782000 --alsologtostderr -v=3: (3.229549167s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.23s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-782000 -n old-k8s-version-782000: exit status 7 (30.410083ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-782000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
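Note: --format={{.Host}} is a Go text/template rendered against the status record, and the non-zero exit is tolerated by design — the helper logs "exit status 7 (may be ok)" because a stopped host is exactly the state this step expects. A minimal reproduction of the template rendering (field names assumed from the flag, not minikube's actual status type):

    package main

    import (
        "os"
        "text/template"
    )

    type status struct {
        Host    string
        Kubelet string
    }

    func main() {
        t := template.Must(template.New("status").Parse("{{.Host}}\n"))
        _ = t.Execute(os.Stdout, status{Host: "Stopped", Kubelet: "Stopped"}) // prints "Stopped"
    }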

TestStartStop/group/no-preload/serial/Stop (3.33s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-873000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-873000 --alsologtostderr -v=3: (3.325996958s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.33s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-873000 -n no-preload-873000: exit status 7 (59.434542ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-873000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.4s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-573000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-573000 --alsologtostderr -v=3: (3.397237125s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.40s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-573000 -n embed-certs-573000: exit status 7 (55.821917ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-573000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-353000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-353000 --alsologtostderr -v=3: (3.793645459s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.79s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-353000 -n default-k8s-diff-port-353000: exit status 7 (54.075791ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-353000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-972000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.99s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-972000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-972000 --alsologtostderr -v=3: (3.986094208s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.99s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-972000 -n newest-cni-972000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-972000 -n newest-cni-972000: exit status 7 (59.00175ms)

-- stdout --
	Stopped

start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-972000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-122000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-122000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-122000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-122000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-122000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-122000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-122000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-122000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-122000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-122000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-122000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-122000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-122000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-122000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-122000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-122000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-122000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-122000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-122000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-122000" does not exist

>>> host: /etc/cni:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: ip a s:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: ip r s:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: iptables-save:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: iptables table nat:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-122000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-122000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-122000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-122000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-122000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-122000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-122000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-122000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-122000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-122000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-122000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: kubelet daemon config:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> k8s: kubelet logs:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-122000

>>> host: docker daemon status:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: docker daemon config:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: docker system info:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: cri-docker daemon status:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: cri-docker daemon config:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: cri-dockerd version:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: containerd daemon status:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: containerd daemon config:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: containerd config dump:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: crio daemon status:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: crio daemon config:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: /etc/crio:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

>>> host: crio config:
* Profile "cilium-122000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-122000"

----------------------- debugLogs end: cilium-122000 [took: 2.173829083s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-122000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-122000
--- SKIP: TestNetworkPlugins/group/cilium (2.28s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-437000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-437000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
